00:00:00.001 Started by upstream project "autotest-per-patch" build number 132285
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.026 The recommended git tool is: git
00:00:00.027 using credential 00000000-0000-0000-0000-000000000002
00:00:00.029 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.044 Fetching changes from the remote Git repository
00:00:00.047 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.065 Using shallow fetch with depth 1
00:00:00.065 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.065 > git --version # timeout=10
00:00:00.090 > git --version # 'git version 2.39.2'
00:00:00.090 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.141 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.141 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.280 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.290 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.300 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:02.300 > git config core.sparsecheckout # timeout=10
00:00:02.311 > git read-tree -mu HEAD # timeout=10
00:00:02.326 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:02.342 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:02.342 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:02.429 [Pipeline] Start of Pipeline
00:00:02.443 [Pipeline] library
00:00:02.445 Loading library shm_lib@master
00:00:02.445 Library shm_lib@master is cached. Copying from home.
00:00:02.459 [Pipeline] node
00:00:02.467 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:02.469 [Pipeline] {
00:00:02.479 [Pipeline] catchError
00:00:02.481 [Pipeline] {
00:00:02.494 [Pipeline] wrap
00:00:02.502 [Pipeline] {
00:00:02.507 [Pipeline] stage
00:00:02.508 [Pipeline] { (Prologue)
00:00:02.519 [Pipeline] echo
00:00:02.520 Node: VM-host-WFP7
00:00:02.524 [Pipeline] cleanWs
00:00:02.531 [WS-CLEANUP] Deleting project workspace...
00:00:02.531 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.536 [WS-CLEANUP] done
00:00:02.709 [Pipeline] setCustomBuildProperty
00:00:02.794 [Pipeline] httpRequest
00:00:03.109 [Pipeline] echo
00:00:03.111 Sorcerer 10.211.164.20 is alive
00:00:03.120 [Pipeline] retry
00:00:03.122 [Pipeline] {
00:00:03.135 [Pipeline] httpRequest
00:00:03.140 HttpMethod: GET
00:00:03.141 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.141 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.142 Response Code: HTTP/1.1 200 OK
00:00:03.142 Success: Status code 200 is in the accepted range: 200,404
00:00:03.143 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.288 [Pipeline] }
00:00:03.304 [Pipeline] // retry
00:00:03.310 [Pipeline] sh
00:00:03.590 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:03.683 [Pipeline] httpRequest
00:00:04.245 [Pipeline] echo
00:00:04.247 Sorcerer 10.211.164.20 is alive
00:00:04.256 [Pipeline] retry
00:00:04.258 [Pipeline] {
00:00:04.270 [Pipeline] httpRequest
00:00:04.274 HttpMethod: GET
00:00:04.275 URL: http://10.211.164.20/packages/spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz
00:00:04.276 Sending request to url: http://10.211.164.20/packages/spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz
00:00:04.277 Response Code: HTTP/1.1 200 OK
00:00:04.277 Success: Status code 200 is in the accepted range: 200,404
00:00:04.278 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz
00:00:18.350 [Pipeline] }
00:00:18.371 [Pipeline] // retry
00:00:18.379 [Pipeline] sh
00:00:18.662 + tar --no-same-owner -xf spdk_318515b44ec8b67f83bcc9ca83f0c7d5ea919e62.tar.gz
00:00:21.211 [Pipeline] sh
00:00:21.494 + git -C spdk log --oneline -n5
00:00:21.494 318515b44 nvme/perf: interrupt mode support for pcie controller
00:00:21.494 7bc1134d6 test/scheduler: Read PID's status file only once
00:00:21.494 0b65bb478 test/scheduler: Account for multiple cpus in the affinity mask
00:00:21.494 a96685099 test/nvmf: Tweak nvme_connect()
00:00:21.494 90486f7e8 accel/dpdk_compressdev: Use the proper spdk_free function in error path
00:00:21.522 [Pipeline] writeFile
00:00:21.546 [Pipeline] sh
00:00:21.844 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:21.857 [Pipeline] sh
00:00:22.141 + cat autorun-spdk.conf
00:00:22.141 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:22.141 SPDK_RUN_ASAN=1
00:00:22.141 SPDK_RUN_UBSAN=1
00:00:22.141 SPDK_TEST_RAID=1
00:00:22.141 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:22.148 RUN_NIGHTLY=0
00:00:22.150 [Pipeline] }
00:00:22.164 [Pipeline] // stage
00:00:22.179 [Pipeline] stage
00:00:22.181 [Pipeline] { (Run VM)
00:00:22.194 [Pipeline] sh
00:00:22.482 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:22.482 + echo 'Start stage prepare_nvme.sh'
00:00:22.482 Start stage prepare_nvme.sh
00:00:22.482 + [[ -n 2 ]]
00:00:22.482 + disk_prefix=ex2
00:00:22.482 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:00:22.482 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:00:22.482 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:00:22.482 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:22.482 ++ SPDK_RUN_ASAN=1
00:00:22.482 ++ SPDK_RUN_UBSAN=1
00:00:22.482 ++ SPDK_TEST_RAID=1
00:00:22.482 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:22.482 ++ RUN_NIGHTLY=0
00:00:22.482 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:00:22.482 + nvme_files=()
00:00:22.482 + declare -A nvme_files
00:00:22.482 + backend_dir=/var/lib/libvirt/images/backends
00:00:22.482 + nvme_files['nvme.img']=5G
00:00:22.482 + nvme_files['nvme-cmb.img']=5G
00:00:22.482 + nvme_files['nvme-multi0.img']=4G
00:00:22.482 + nvme_files['nvme-multi1.img']=4G
00:00:22.482 + nvme_files['nvme-multi2.img']=4G
00:00:22.482 + nvme_files['nvme-openstack.img']=8G
00:00:22.482 + nvme_files['nvme-zns.img']=5G
00:00:22.482 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:22.482 + (( SPDK_TEST_FTL == 1 ))
00:00:22.482 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:22.482 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:22.482 + for nvme in "${!nvme_files[@]}"
00:00:22.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:22.482 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:22.482 + for nvme in "${!nvme_files[@]}"
00:00:22.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:22.482 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:22.482 + for nvme in "${!nvme_files[@]}"
00:00:22.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:22.482 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:22.482 + for nvme in "${!nvme_files[@]}"
00:00:22.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:22.482 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:22.482 + for nvme in "${!nvme_files[@]}"
00:00:22.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:22.482 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:22.482 + for nvme in "${!nvme_files[@]}"
00:00:22.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:22.482 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:22.482 + for nvme in "${!nvme_files[@]}"
00:00:22.482 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:22.742 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:22.742 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:22.742 + echo 'End stage prepare_nvme.sh'
00:00:22.742 End stage prepare_nvme.sh
00:00:22.754 [Pipeline] sh
00:00:23.039 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:23.039 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:00:23.039
00:00:23.039 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:00:23.039 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:00:23.039 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:00:23.039 HELP=0
00:00:23.039 DRY_RUN=0
00:00:23.039 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:00:23.039 NVME_DISKS_TYPE=nvme,nvme,
00:00:23.039 NVME_AUTO_CREATE=0
00:00:23.039 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:00:23.039 NVME_CMB=,,
00:00:23.039 NVME_PMR=,,
00:00:23.039 NVME_ZNS=,,
00:00:23.039 NVME_MS=,,
00:00:23.039 NVME_FDP=,,
00:00:23.039 SPDK_VAGRANT_DISTRO=fedora39
00:00:23.039 SPDK_VAGRANT_VMCPU=10
00:00:23.039 SPDK_VAGRANT_VMRAM=12288
00:00:23.039 SPDK_VAGRANT_PROVIDER=libvirt
00:00:23.039 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:23.039 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:23.039 SPDK_OPENSTACK_NETWORK=0
00:00:23.039 VAGRANT_PACKAGE_BOX=0
00:00:23.039 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:23.039 FORCE_DISTRO=true
00:00:23.039 VAGRANT_BOX_VERSION=
00:00:23.039 EXTRA_VAGRANTFILES=
00:00:23.039 NIC_MODEL=virtio
00:00:23.039
00:00:23.039 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:00:23.039 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:00:25.578 Bringing machine 'default' up with 'libvirt' provider...
00:00:25.838 ==> default: Creating image (snapshot of base box volume).
00:00:25.838 ==> default: Creating domain with the following settings...
00:00:25.838 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731662353_b2efd30b60330b456f3e
00:00:25.838 ==> default: -- Domain type: kvm
00:00:25.838 ==> default: -- Cpus: 10
00:00:25.838 ==> default: -- Feature: acpi
00:00:25.838 ==> default: -- Feature: apic
00:00:25.838 ==> default: -- Feature: pae
00:00:25.838 ==> default: -- Memory: 12288M
00:00:25.838 ==> default: -- Memory Backing: hugepages:
00:00:25.838 ==> default: -- Management MAC:
00:00:25.838 ==> default: -- Loader:
00:00:25.838 ==> default: -- Nvram:
00:00:25.838 ==> default: -- Base box: spdk/fedora39
00:00:25.838 ==> default: -- Storage pool: default
00:00:25.838 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731662353_b2efd30b60330b456f3e.img (20G)
00:00:25.838 ==> default: -- Volume Cache: default
00:00:25.838 ==> default: -- Kernel:
00:00:25.838 ==> default: -- Initrd:
00:00:25.838 ==> default: -- Graphics Type: vnc
00:00:25.838 ==> default: -- Graphics Port: -1
00:00:25.838 ==> default: -- Graphics IP: 127.0.0.1
00:00:25.838 ==> default: -- Graphics Password: Not defined
00:00:25.838 ==> default: -- Video Type: cirrus
00:00:25.838 ==> default: -- Video VRAM: 9216
00:00:25.838 ==> default: -- Sound Type:
00:00:25.838 ==> default: -- Keymap: en-us
00:00:25.838 ==> default: -- TPM Path:
00:00:25.838 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:25.838 ==> default: -- Command line args:
00:00:25.838 ==> default: -> value=-device,
00:00:25.838 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:25.838 ==> default: -> value=-drive,
00:00:25.838 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:25.838 ==> default: -> value=-device,
00:00:25.838 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:25.838 ==> default: -> value=-device,
00:00:25.838 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:25.838 ==> default: -> value=-drive,
00:00:25.838 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:25.838 ==> default: -> value=-device,
00:00:25.838 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:25.838 ==> default: -> value=-drive,
00:00:25.838 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:25.838 ==> default: -> value=-device,
00:00:25.838 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:25.838 ==> default: -> value=-drive,
00:00:25.838 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:25.838 ==> default: -> value=-device,
00:00:25.838 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:26.097 ==> default: Creating shared folders metadata...
00:00:26.097 ==> default: Starting domain.
00:00:27.474 ==> default: Waiting for domain to get an IP address...
00:00:45.629 ==> default: Waiting for SSH to become available...
00:00:45.629 ==> default: Configuring and enabling network interfaces...
00:00:50.906 default: SSH address: 192.168.121.104:22
00:00:50.906 default: SSH username: vagrant
00:00:50.906 default: SSH auth method: private key
00:00:53.460 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:01.584 ==> default: Mounting SSHFS shared folder...
00:01:04.154 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:04.154 ==> default: Checking Mount..
00:01:05.528 ==> default: Folder Successfully Mounted!
00:01:05.528 ==> default: Running provisioner: file...
00:01:06.463 default: ~/.gitconfig => .gitconfig
00:01:07.032
00:01:07.032 SUCCESS!
00:01:07.032
00:01:07.032 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:07.032 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:07.032 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:07.032
00:01:07.042 [Pipeline] }
00:01:07.058 [Pipeline] // stage
00:01:07.067 [Pipeline] dir
00:01:07.068 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:01:07.069 [Pipeline] {
00:01:07.083 [Pipeline] catchError
00:01:07.085 [Pipeline] {
00:01:07.098 [Pipeline] sh
00:01:07.403 + vagrant+ ssh-config --host vagrant
00:01:07.404 sed -ne /^Host/,$p
00:01:07.404 + tee ssh_conf
00:01:09.943 Host vagrant
00:01:09.943 HostName 192.168.121.104
00:01:09.943 User vagrant
00:01:09.943 Port 22
00:01:09.943 UserKnownHostsFile /dev/null
00:01:09.943 StrictHostKeyChecking no
00:01:09.943 PasswordAuthentication no
00:01:09.943 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:09.943 IdentitiesOnly yes
00:01:09.943 LogLevel FATAL
00:01:09.943 ForwardAgent yes
00:01:09.943 ForwardX11 yes
00:01:09.943
00:01:09.957 [Pipeline] withEnv
00:01:09.960 [Pipeline] {
00:01:09.973 [Pipeline] sh
00:01:10.257 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:10.257 source /etc/os-release
00:01:10.257 [[ -e /image.version ]] && img=$(< /image.version)
00:01:10.257 # Minimal, systemd-like check.
00:01:10.257 if [[ -e /.dockerenv ]]; then
00:01:10.257 # Clear garbage from the node's name:
00:01:10.257 # agt-er_autotest_547-896 -> autotest_547-896
00:01:10.257 # $HOSTNAME is the actual container id
00:01:10.257 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:10.257 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:10.257 # We can assume this is a mount from a host where container is running,
00:01:10.257 # so fetch its hostname to easily identify the target swarm worker.
00:01:10.257 container="$(< /etc/hostname) ($agent)"
00:01:10.257 else
00:01:10.257 # Fallback
00:01:10.257 container=$agent
00:01:10.257 fi
00:01:10.257 fi
00:01:10.257 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:10.257
00:01:10.529 [Pipeline] }
00:01:10.545 [Pipeline] // withEnv
00:01:10.555 [Pipeline] setCustomBuildProperty
00:01:10.571 [Pipeline] stage
00:01:10.574 [Pipeline] { (Tests)
00:01:10.592 [Pipeline] sh
00:01:10.919 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:11.192 [Pipeline] sh
00:01:11.476 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:11.747 [Pipeline] timeout
00:01:11.748 Timeout set to expire in 1 hr 30 min
00:01:11.749 [Pipeline] {
00:01:11.763 [Pipeline] sh
00:01:12.092 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:12.657 HEAD is now at 318515b44 nvme/perf: interrupt mode support for pcie controller
00:01:12.668 [Pipeline] sh
00:01:12.946 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:13.219 [Pipeline] sh
00:01:13.504 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:13.777 [Pipeline] sh
00:01:14.057 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:14.316 ++ readlink -f spdk_repo
00:01:14.316 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:14.316 + [[ -n /home/vagrant/spdk_repo ]]
00:01:14.316 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:14.316 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:14.316 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:14.316 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:14.316 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:14.316 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:14.316 + cd /home/vagrant/spdk_repo
00:01:14.316 + source /etc/os-release
00:01:14.316 ++ NAME='Fedora Linux'
00:01:14.316 ++ VERSION='39 (Cloud Edition)'
00:01:14.316 ++ ID=fedora
00:01:14.316 ++ VERSION_ID=39
00:01:14.316 ++ VERSION_CODENAME=
00:01:14.316 ++ PLATFORM_ID=platform:f39
00:01:14.316 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:14.316 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:14.316 ++ LOGO=fedora-logo-icon
00:01:14.316 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:14.316 ++ HOME_URL=https://fedoraproject.org/
00:01:14.316 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:14.316 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:14.316 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:14.316 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:14.316 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:14.316 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:14.316 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:14.316 ++ SUPPORT_END=2024-11-12
00:01:14.316 ++ VARIANT='Cloud Edition'
00:01:14.316 ++ VARIANT_ID=cloud
00:01:14.316 + uname -a
00:01:14.316 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:14.316 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:14.886 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:14.886 Hugepages
00:01:14.886 node hugesize free / total
00:01:14.886 node0 1048576kB 0 / 0
00:01:14.886 node0 2048kB 0 / 0
00:01:14.886
00:01:14.886 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:14.886 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:14.886 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:14.886 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:14.886 + rm -f /tmp/spdk-ld-path
00:01:14.886 + source autorun-spdk.conf
00:01:14.886 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.886 ++ SPDK_RUN_ASAN=1
00:01:14.886 ++ SPDK_RUN_UBSAN=1
00:01:14.886 ++ SPDK_TEST_RAID=1
00:01:14.886 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:14.886 ++ RUN_NIGHTLY=0
00:01:14.886 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:14.886 + [[ -n '' ]]
00:01:14.886 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:14.886 + for M in /var/spdk/build-*-manifest.txt
00:01:14.886 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:14.886 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:14.886 + for M in /var/spdk/build-*-manifest.txt
00:01:14.886 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:14.886 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:14.886 + for M in /var/spdk/build-*-manifest.txt
00:01:14.886 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:14.886 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:14.886 ++ uname
00:01:15.145 + [[ Linux == \L\i\n\u\x ]]
00:01:15.145 + sudo dmesg -T
00:01:15.145 + sudo dmesg --clear
00:01:15.145 + dmesg_pid=5429
00:01:15.145 + [[ Fedora Linux == FreeBSD ]]
00:01:15.145 + sudo dmesg -Tw
00:01:15.145 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.145 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.145 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:15.145 + [[ -x /usr/src/fio-static/fio ]]
00:01:15.145 + export FIO_BIN=/usr/src/fio-static/fio
00:01:15.145 + FIO_BIN=/usr/src/fio-static/fio
00:01:15.145 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:15.145 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:15.145 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:15.145 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.145 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.146 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:15.146 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.146 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.146 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:15.146 09:20:03 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:15.146 09:20:03 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:15.146 09:20:03 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.146 09:20:03 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:15.146 09:20:03 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:15.146 09:20:03 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:15.146 09:20:03 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:15.146 09:20:03 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:15.146 09:20:03 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:15.146 09:20:03 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:15.405 09:20:03 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:15.405 09:20:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:15.405 09:20:03 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:15.405 09:20:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:15.405 09:20:03 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:15.406 09:20:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:15.406 09:20:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.406 09:20:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.406 09:20:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.406 09:20:03 -- paths/export.sh@5 -- $ export PATH
00:01:15.406 09:20:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:15.406 09:20:03 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:15.406 09:20:03 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:15.406 09:20:03 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731662403.XXXXXX
00:01:15.406 09:20:03 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731662403.16H99f
00:01:15.406 09:20:03 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:15.406 09:20:03 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:15.406 09:20:03 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:15.406 09:20:03 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:15.406 09:20:03 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:15.406 09:20:03 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:15.406 09:20:03 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:15.406 09:20:03 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.406 09:20:03 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:15.406 09:20:03 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:15.406 09:20:03 -- pm/common@17 -- $ local monitor
00:01:15.406 09:20:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.406 09:20:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:15.406 09:20:03 -- pm/common@25 -- $ sleep 1
00:01:15.406 09:20:03 -- pm/common@21 -- $ date +%s
00:01:15.406 09:20:03 -- pm/common@21 -- $ date +%s
00:01:15.406 09:20:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731662403
00:01:15.406 09:20:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731662403
00:01:15.406 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731662403_collect-cpu-load.pm.log
00:01:15.406 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731662403_collect-vmstat.pm.log
00:01:16.345 09:20:04 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:16.345 09:20:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:16.345 09:20:04 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:16.345 09:20:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:16.345 09:20:04 -- spdk/autobuild.sh@16 -- $ date -u
00:01:16.345 Fri Nov 15 09:20:04 AM UTC 2024
00:01:16.346 09:20:04 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:16.346 v25.01-pre-185-g318515b44
00:01:16.346 09:20:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:16.346 09:20:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:16.346 09:20:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:16.346 09:20:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:16.346 09:20:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.346 ************************************
00:01:16.346 START TEST asan
00:01:16.346 ************************************
00:01:16.346 using asan
00:01:16.346 09:20:04 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:01:16.346
00:01:16.346 real 0m0.001s
00:01:16.346 user 0m0.000s
00:01:16.346 sys 0m0.000s
00:01:16.346 09:20:04 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:16.346 09:20:04 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:16.346 ************************************
00:01:16.346 END TEST asan
00:01:16.346 ************************************
00:01:16.346 09:20:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:16.346 09:20:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:16.346 09:20:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:16.346 09:20:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:16.346 09:20:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.346 ************************************
00:01:16.346 START TEST ubsan
00:01:16.346 ************************************
00:01:16.346 using ubsan
00:01:16.346 09:20:04 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:16.346
00:01:16.346 real 0m0.000s
00:01:16.346 user 0m0.000s
00:01:16.346 sys 0m0.000s
00:01:16.346 09:20:04 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:16.346 09:20:04 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:16.346 ************************************
00:01:16.346 END TEST ubsan
00:01:16.346 ************************************
00:01:16.605 09:20:04 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:16.605 09:20:04 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:16.605 09:20:04 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:16.605 09:20:04 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:16.605 09:20:04 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:16.605 09:20:04 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:16.605 09:20:04 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:16.605 09:20:04 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:16.605 09:20:04 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:16.605 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:16.605 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:17.175 Using 'verbs' RDMA provider
00:01:33.115 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:48.073 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:48.333 Creating mk/config.mk...done.
00:01:48.333 Creating mk/cc.flags.mk...done.
00:01:48.333 Type 'make' to build.
00:01:48.333 09:20:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:48.333 09:20:36 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:48.333 09:20:36 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:48.333 09:20:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.333 ************************************
00:01:48.333 START TEST make
00:01:48.333 ************************************
00:01:48.333 09:20:36 make -- common/autotest_common.sh@1127 -- $ make -j10
00:01:48.923 make[1]: Nothing to be done for 'all'.
00:01:58.912 The Meson build system 00:01:58.912 Version: 1.5.0 00:01:58.912 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:58.912 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:58.912 Build type: native build 00:01:58.912 Program cat found: YES (/usr/bin/cat) 00:01:58.912 Project name: DPDK 00:01:58.912 Project version: 24.03.0 00:01:58.912 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:58.912 C linker for the host machine: cc ld.bfd 2.40-14 00:01:58.912 Host machine cpu family: x86_64 00:01:58.912 Host machine cpu: x86_64 00:01:58.912 Message: ## Building in Developer Mode ## 00:01:58.912 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.912 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.912 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.912 Program python3 found: YES (/usr/bin/python3) 00:01:58.912 Program cat found: YES (/usr/bin/cat) 00:01:58.912 Compiler for C supports arguments -march=native: YES 00:01:58.912 Checking for size of "void *" : 8 00:01:58.912 Checking for size of "void *" : 8 (cached) 00:01:58.912 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:58.912 Library m found: YES 00:01:58.912 Library numa found: YES 00:01:58.912 Has header "numaif.h" : YES 00:01:58.912 Library fdt found: NO 00:01:58.912 Library execinfo found: NO 00:01:58.912 Has header "execinfo.h" : YES 00:01:58.912 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:58.912 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.912 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.912 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.912 Run-time dependency openssl found: YES 3.1.1 00:01:58.912 Run-time dependency libpcap found: YES 1.10.4 00:01:58.912 Has header "pcap.h" with dependency 
libpcap: YES 00:01:58.912 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.912 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.912 Compiler for C supports arguments -Wformat: YES 00:01:58.912 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.912 Compiler for C supports arguments -Wformat-security: NO 00:01:58.912 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.912 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.912 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.912 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.912 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.912 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.912 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.912 Compiler for C supports arguments -Wundef: YES 00:01:58.912 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.912 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.912 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:58.912 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.912 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.912 Program objdump found: YES (/usr/bin/objdump) 00:01:58.912 Compiler for C supports arguments -mavx512f: YES 00:01:58.912 Checking if "AVX512 checking" compiles: YES 00:01:58.912 Fetching value of define "__SSE4_2__" : 1 00:01:58.912 Fetching value of define "__AES__" : 1 00:01:58.912 Fetching value of define "__AVX__" : 1 00:01:58.912 Fetching value of define "__AVX2__" : 1 00:01:58.912 Fetching value of define "__AVX512BW__" : 1 00:01:58.912 Fetching value of define "__AVX512CD__" : 1 00:01:58.912 Fetching value of define "__AVX512DQ__" : 1 00:01:58.912 Fetching value of define "__AVX512F__" : 1 00:01:58.912 Fetching value of define "__AVX512VL__" : 1 00:01:58.912 Fetching value of define 
"__PCLMUL__" : 1 00:01:58.912 Fetching value of define "__RDRND__" : 1 00:01:58.912 Fetching value of define "__RDSEED__" : 1 00:01:58.912 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.912 Fetching value of define "__znver1__" : (undefined) 00:01:58.912 Fetching value of define "__znver2__" : (undefined) 00:01:58.912 Fetching value of define "__znver3__" : (undefined) 00:01:58.912 Fetching value of define "__znver4__" : (undefined) 00:01:58.912 Library asan found: YES 00:01:58.912 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.912 Message: lib/log: Defining dependency "log" 00:01:58.912 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.912 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.912 Library rt found: YES 00:01:58.912 Checking for function "getentropy" : NO 00:01:58.912 Message: lib/eal: Defining dependency "eal" 00:01:58.912 Message: lib/ring: Defining dependency "ring" 00:01:58.912 Message: lib/rcu: Defining dependency "rcu" 00:01:58.912 Message: lib/mempool: Defining dependency "mempool" 00:01:58.912 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.912 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.912 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.912 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.912 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.912 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:58.912 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:58.912 Compiler for C supports arguments -mpclmul: YES 00:01:58.912 Compiler for C supports arguments -maes: YES 00:01:58.912 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.912 Compiler for C supports arguments -mavx512bw: YES 00:01:58.912 Compiler for C supports arguments -mavx512dq: YES 00:01:58.912 Compiler for C supports arguments -mavx512vl: YES 00:01:58.912 Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:58.912 Compiler for C supports arguments -mavx2: YES 00:01:58.912 Compiler for C supports arguments -mavx: YES 00:01:58.912 Message: lib/net: Defining dependency "net" 00:01:58.912 Message: lib/meter: Defining dependency "meter" 00:01:58.912 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.912 Message: lib/pci: Defining dependency "pci" 00:01:58.912 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.912 Message: lib/hash: Defining dependency "hash" 00:01:58.912 Message: lib/timer: Defining dependency "timer" 00:01:58.912 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.912 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.912 Message: lib/dmadev: Defining dependency "dmadev" 00:01:58.912 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.912 Message: lib/power: Defining dependency "power" 00:01:58.912 Message: lib/reorder: Defining dependency "reorder" 00:01:58.912 Message: lib/security: Defining dependency "security" 00:01:58.912 Has header "linux/userfaultfd.h" : YES 00:01:58.912 Has header "linux/vduse.h" : YES 00:01:58.912 Message: lib/vhost: Defining dependency "vhost" 00:01:58.912 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.912 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.912 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.912 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.912 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:58.912 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:58.912 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:58.912 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:58.912 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:58.912 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:58.912 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:58.912 Configuring doxy-api-html.conf using configuration 00:01:58.912 Configuring doxy-api-man.conf using configuration 00:01:58.912 Program mandb found: YES (/usr/bin/mandb) 00:01:58.912 Program sphinx-build found: NO 00:01:58.912 Configuring rte_build_config.h using configuration 00:01:58.912 Message: 00:01:58.912 ================= 00:01:58.912 Applications Enabled 00:01:58.912 ================= 00:01:58.912 00:01:58.912 apps: 00:01:58.912 00:01:58.912 00:01:58.912 Message: 00:01:58.912 ================= 00:01:58.912 Libraries Enabled 00:01:58.912 ================= 00:01:58.912 00:01:58.912 libs: 00:01:58.912 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.912 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:58.912 cryptodev, dmadev, power, reorder, security, vhost, 00:01:58.912 00:01:58.913 Message: 00:01:58.913 =============== 00:01:58.913 Drivers Enabled 00:01:58.913 =============== 00:01:58.913 00:01:58.913 common: 00:01:58.913 00:01:58.913 bus: 00:01:58.913 pci, vdev, 00:01:58.913 mempool: 00:01:58.913 ring, 00:01:58.913 dma: 00:01:58.913 00:01:58.913 net: 00:01:58.913 00:01:58.913 crypto: 00:01:58.913 00:01:58.913 compress: 00:01:58.913 00:01:58.913 vdpa: 00:01:58.913 00:01:58.913 00:01:58.913 Message: 00:01:58.913 ================= 00:01:58.913 Content Skipped 00:01:58.913 ================= 00:01:58.913 00:01:58.913 apps: 00:01:58.913 dumpcap: explicitly disabled via build config 00:01:58.913 graph: explicitly disabled via build config 00:01:58.913 pdump: explicitly disabled via build config 00:01:58.913 proc-info: explicitly disabled via build config 00:01:58.913 test-acl: explicitly disabled via build config 00:01:58.913 test-bbdev: explicitly disabled via build config 00:01:58.913 test-cmdline: explicitly disabled via build config 00:01:58.913 test-compress-perf: explicitly disabled via build config 00:01:58.913 test-crypto-perf: explicitly disabled via build 
config 00:01:58.913 test-dma-perf: explicitly disabled via build config 00:01:58.913 test-eventdev: explicitly disabled via build config 00:01:58.913 test-fib: explicitly disabled via build config 00:01:58.913 test-flow-perf: explicitly disabled via build config 00:01:58.913 test-gpudev: explicitly disabled via build config 00:01:58.913 test-mldev: explicitly disabled via build config 00:01:58.913 test-pipeline: explicitly disabled via build config 00:01:58.913 test-pmd: explicitly disabled via build config 00:01:58.913 test-regex: explicitly disabled via build config 00:01:58.913 test-sad: explicitly disabled via build config 00:01:58.913 test-security-perf: explicitly disabled via build config 00:01:58.913 00:01:58.913 libs: 00:01:58.913 argparse: explicitly disabled via build config 00:01:58.913 metrics: explicitly disabled via build config 00:01:58.913 acl: explicitly disabled via build config 00:01:58.913 bbdev: explicitly disabled via build config 00:01:58.913 bitratestats: explicitly disabled via build config 00:01:58.913 bpf: explicitly disabled via build config 00:01:58.913 cfgfile: explicitly disabled via build config 00:01:58.913 distributor: explicitly disabled via build config 00:01:58.913 efd: explicitly disabled via build config 00:01:58.913 eventdev: explicitly disabled via build config 00:01:58.913 dispatcher: explicitly disabled via build config 00:01:58.913 gpudev: explicitly disabled via build config 00:01:58.913 gro: explicitly disabled via build config 00:01:58.913 gso: explicitly disabled via build config 00:01:58.913 ip_frag: explicitly disabled via build config 00:01:58.913 jobstats: explicitly disabled via build config 00:01:58.913 latencystats: explicitly disabled via build config 00:01:58.913 lpm: explicitly disabled via build config 00:01:58.913 member: explicitly disabled via build config 00:01:58.913 pcapng: explicitly disabled via build config 00:01:58.913 rawdev: explicitly disabled via build config 00:01:58.913 regexdev: explicitly 
disabled via build config 00:01:58.913 mldev: explicitly disabled via build config 00:01:58.913 rib: explicitly disabled via build config 00:01:58.913 sched: explicitly disabled via build config 00:01:58.913 stack: explicitly disabled via build config 00:01:58.913 ipsec: explicitly disabled via build config 00:01:58.913 pdcp: explicitly disabled via build config 00:01:58.913 fib: explicitly disabled via build config 00:01:58.913 port: explicitly disabled via build config 00:01:58.913 pdump: explicitly disabled via build config 00:01:58.913 table: explicitly disabled via build config 00:01:58.913 pipeline: explicitly disabled via build config 00:01:58.913 graph: explicitly disabled via build config 00:01:58.913 node: explicitly disabled via build config 00:01:58.913 00:01:58.913 drivers: 00:01:58.913 common/cpt: not in enabled drivers build config 00:01:58.913 common/dpaax: not in enabled drivers build config 00:01:58.913 common/iavf: not in enabled drivers build config 00:01:58.913 common/idpf: not in enabled drivers build config 00:01:58.913 common/ionic: not in enabled drivers build config 00:01:58.913 common/mvep: not in enabled drivers build config 00:01:58.913 common/octeontx: not in enabled drivers build config 00:01:58.913 bus/auxiliary: not in enabled drivers build config 00:01:58.913 bus/cdx: not in enabled drivers build config 00:01:58.913 bus/dpaa: not in enabled drivers build config 00:01:58.913 bus/fslmc: not in enabled drivers build config 00:01:58.913 bus/ifpga: not in enabled drivers build config 00:01:58.913 bus/platform: not in enabled drivers build config 00:01:58.913 bus/uacce: not in enabled drivers build config 00:01:58.913 bus/vmbus: not in enabled drivers build config 00:01:58.913 common/cnxk: not in enabled drivers build config 00:01:58.913 common/mlx5: not in enabled drivers build config 00:01:58.913 common/nfp: not in enabled drivers build config 00:01:58.913 common/nitrox: not in enabled drivers build config 00:01:58.913 common/qat: not 
in enabled drivers build config 00:01:58.913 common/sfc_efx: not in enabled drivers build config 00:01:58.913 mempool/bucket: not in enabled drivers build config 00:01:58.913 mempool/cnxk: not in enabled drivers build config 00:01:58.913 mempool/dpaa: not in enabled drivers build config 00:01:58.913 mempool/dpaa2: not in enabled drivers build config 00:01:58.913 mempool/octeontx: not in enabled drivers build config 00:01:58.913 mempool/stack: not in enabled drivers build config 00:01:58.913 dma/cnxk: not in enabled drivers build config 00:01:58.913 dma/dpaa: not in enabled drivers build config 00:01:58.913 dma/dpaa2: not in enabled drivers build config 00:01:58.913 dma/hisilicon: not in enabled drivers build config 00:01:58.913 dma/idxd: not in enabled drivers build config 00:01:58.913 dma/ioat: not in enabled drivers build config 00:01:58.913 dma/skeleton: not in enabled drivers build config 00:01:58.913 net/af_packet: not in enabled drivers build config 00:01:58.913 net/af_xdp: not in enabled drivers build config 00:01:58.913 net/ark: not in enabled drivers build config 00:01:58.913 net/atlantic: not in enabled drivers build config 00:01:58.913 net/avp: not in enabled drivers build config 00:01:58.913 net/axgbe: not in enabled drivers build config 00:01:58.913 net/bnx2x: not in enabled drivers build config 00:01:58.913 net/bnxt: not in enabled drivers build config 00:01:58.913 net/bonding: not in enabled drivers build config 00:01:58.913 net/cnxk: not in enabled drivers build config 00:01:58.913 net/cpfl: not in enabled drivers build config 00:01:58.913 net/cxgbe: not in enabled drivers build config 00:01:58.913 net/dpaa: not in enabled drivers build config 00:01:58.913 net/dpaa2: not in enabled drivers build config 00:01:58.913 net/e1000: not in enabled drivers build config 00:01:58.913 net/ena: not in enabled drivers build config 00:01:58.913 net/enetc: not in enabled drivers build config 00:01:58.913 net/enetfec: not in enabled drivers build config 
00:01:58.913 net/enic: not in enabled drivers build config 00:01:58.913 net/failsafe: not in enabled drivers build config 00:01:58.913 net/fm10k: not in enabled drivers build config 00:01:58.913 net/gve: not in enabled drivers build config 00:01:58.913 net/hinic: not in enabled drivers build config 00:01:58.913 net/hns3: not in enabled drivers build config 00:01:58.913 net/i40e: not in enabled drivers build config 00:01:58.913 net/iavf: not in enabled drivers build config 00:01:58.913 net/ice: not in enabled drivers build config 00:01:58.913 net/idpf: not in enabled drivers build config 00:01:58.913 net/igc: not in enabled drivers build config 00:01:58.913 net/ionic: not in enabled drivers build config 00:01:58.913 net/ipn3ke: not in enabled drivers build config 00:01:58.913 net/ixgbe: not in enabled drivers build config 00:01:58.913 net/mana: not in enabled drivers build config 00:01:58.913 net/memif: not in enabled drivers build config 00:01:58.913 net/mlx4: not in enabled drivers build config 00:01:58.913 net/mlx5: not in enabled drivers build config 00:01:58.913 net/mvneta: not in enabled drivers build config 00:01:58.913 net/mvpp2: not in enabled drivers build config 00:01:58.913 net/netvsc: not in enabled drivers build config 00:01:58.913 net/nfb: not in enabled drivers build config 00:01:58.913 net/nfp: not in enabled drivers build config 00:01:58.913 net/ngbe: not in enabled drivers build config 00:01:58.913 net/null: not in enabled drivers build config 00:01:58.913 net/octeontx: not in enabled drivers build config 00:01:58.913 net/octeon_ep: not in enabled drivers build config 00:01:58.913 net/pcap: not in enabled drivers build config 00:01:58.913 net/pfe: not in enabled drivers build config 00:01:58.913 net/qede: not in enabled drivers build config 00:01:58.913 net/ring: not in enabled drivers build config 00:01:58.913 net/sfc: not in enabled drivers build config 00:01:58.913 net/softnic: not in enabled drivers build config 00:01:58.913 net/tap: not in 
enabled drivers build config 00:01:58.913 net/thunderx: not in enabled drivers build config 00:01:58.913 net/txgbe: not in enabled drivers build config 00:01:58.913 net/vdev_netvsc: not in enabled drivers build config 00:01:58.913 net/vhost: not in enabled drivers build config 00:01:58.913 net/virtio: not in enabled drivers build config 00:01:58.913 net/vmxnet3: not in enabled drivers build config 00:01:58.913 raw/*: missing internal dependency, "rawdev" 00:01:58.913 crypto/armv8: not in enabled drivers build config 00:01:58.913 crypto/bcmfs: not in enabled drivers build config 00:01:58.913 crypto/caam_jr: not in enabled drivers build config 00:01:58.913 crypto/ccp: not in enabled drivers build config 00:01:58.913 crypto/cnxk: not in enabled drivers build config 00:01:58.913 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.913 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.913 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.913 crypto/mlx5: not in enabled drivers build config 00:01:58.913 crypto/mvsam: not in enabled drivers build config 00:01:58.913 crypto/nitrox: not in enabled drivers build config 00:01:58.913 crypto/null: not in enabled drivers build config 00:01:58.913 crypto/octeontx: not in enabled drivers build config 00:01:58.913 crypto/openssl: not in enabled drivers build config 00:01:58.913 crypto/scheduler: not in enabled drivers build config 00:01:58.913 crypto/uadk: not in enabled drivers build config 00:01:58.913 crypto/virtio: not in enabled drivers build config 00:01:58.913 compress/isal: not in enabled drivers build config 00:01:58.913 compress/mlx5: not in enabled drivers build config 00:01:58.913 compress/nitrox: not in enabled drivers build config 00:01:58.914 compress/octeontx: not in enabled drivers build config 00:01:58.914 compress/zlib: not in enabled drivers build config 00:01:58.914 regex/*: missing internal dependency, "regexdev" 00:01:58.914 ml/*: missing internal dependency, "mldev" 
00:01:58.914 vdpa/ifc: not in enabled drivers build config 00:01:58.914 vdpa/mlx5: not in enabled drivers build config 00:01:58.914 vdpa/nfp: not in enabled drivers build config 00:01:58.914 vdpa/sfc: not in enabled drivers build config 00:01:58.914 event/*: missing internal dependency, "eventdev" 00:01:58.914 baseband/*: missing internal dependency, "bbdev" 00:01:58.914 gpu/*: missing internal dependency, "gpudev" 00:01:58.914 00:01:58.914 00:01:59.173 Build targets in project: 85 00:01:59.173 00:01:59.173 DPDK 24.03.0 00:01:59.173 00:01:59.173 User defined options 00:01:59.173 buildtype : debug 00:01:59.173 default_library : shared 00:01:59.173 libdir : lib 00:01:59.173 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.173 b_sanitize : address 00:01:59.173 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.173 c_link_args : 00:01:59.173 cpu_instruction_set: native 00:01:59.173 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.173 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.173 enable_docs : false 00:01:59.173 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.173 enable_kmods : false 00:01:59.173 max_lcores : 128 00:01:59.173 tests : false 00:01:59.173 00:01:59.173 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.748 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:59.748 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.748 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 
00:01:59.748 [3/268] Linking static target lib/librte_log.a 00:01:59.748 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.748 [5/268] Linking static target lib/librte_kvargs.a 00:01:59.748 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.315 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.316 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.316 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.316 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.316 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.316 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.316 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.316 [14/268] Linking static target lib/librte_telemetry.a 00:02:00.316 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.574 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.574 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.574 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.834 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.834 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.834 [21/268] Linking target lib/librte_log.so.24.1 00:02:00.834 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.093 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.093 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.093 [25/268] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.093 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.093 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.093 [28/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.093 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.352 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.352 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.352 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.352 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.352 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.352 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.612 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:01.612 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.612 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.612 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.612 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.871 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.871 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.871 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.871 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.871 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:01.871 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:02:02.131 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.132 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.132 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.132 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.391 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.391 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.391 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.391 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.391 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.391 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.391 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.651 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.651 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.910 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.910 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:02.910 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.910 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.910 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.910 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.169 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.169 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.169 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.429 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.429 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.429 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.429 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.429 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.687 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.687 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.687 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.947 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:03.947 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:03.947 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.947 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.947 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.947 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.206 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.206 [84/268] Linking static target lib/librte_ring.a 00:02:04.206 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.206 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.206 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:04.206 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:04.206 [89/268] Linking static target lib/librte_eal.a 00:02:04.466 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:04.725 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:04.725 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:04.725 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:04.725 [94/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:04.725 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:04.725 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:04.725 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:04.725 [98/268] Linking static target lib/librte_rcu.a 00:02:04.725 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:04.984 [100/268] Linking static target lib/librte_mempool.a 00:02:04.984 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.984 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:05.242 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:05.242 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:05.242 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.242 [106/268] Linking static target lib/librte_net.a 00:02:05.242 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.242 [108/268] Linking static target lib/librte_meter.a 00:02:05.242 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.242 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:05.242 [111/268] Linking static target lib/librte_mbuf.a 00:02:05.500 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:05.500 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.758 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:05.758 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:05.758 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:05.758 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:06.017 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:06.017 [119/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.276 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.276 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:06.276 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.534 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.793 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:06.793 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.793 [126/268] Linking static target lib/librte_pci.a 00:02:06.793 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:06.793 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.793 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:07.052 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:07.052 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:07.052 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:07.052 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:07.052 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:07.052 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:07.052 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:07.052 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:07.052 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 
00:02:07.312 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.312 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:07.312 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:07.312 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:07.312 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:07.312 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:07.312 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:07.573 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:07.573 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:07.573 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:07.573 [149/268] Linking static target lib/librte_cmdline.a 00:02:07.573 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.832 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.832 [152/268] Linking static target lib/librte_timer.a 00:02:07.832 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.832 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:08.091 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:08.091 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:08.091 [157/268] Linking static target lib/librte_ethdev.a 00:02:08.091 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:08.351 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.351 [160/268] Linking static target lib/librte_compressdev.a 00:02:08.351 [161/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.351 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.351 [163/268] Linking static target lib/librte_hash.a 00:02:08.351 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:08.610 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:08.610 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:08.610 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:08.871 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.871 [169/268] Linking static target lib/librte_dmadev.a 00:02:08.871 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.871 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:09.131 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:09.131 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:09.131 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.131 [175/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.390 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:09.390 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:09.650 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.650 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:09.650 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:09.650 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.650 [182/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:09.650 [183/268] Linking static target lib/librte_cryptodev.a 00:02:09.650 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.910 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.910 [186/268] Linking static target lib/librte_power.a 00:02:10.170 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:10.170 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:10.170 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:10.170 [190/268] Linking static target lib/librte_reorder.a 00:02:10.170 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:10.170 [192/268] Linking static target lib/librte_security.a 00:02:10.429 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:10.689 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.689 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.948 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.948 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.208 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:11.208 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:11.208 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:11.467 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:11.467 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.726 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.726 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.727 [205/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.727 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.986 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.986 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.986 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.986 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.986 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.255 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:12.255 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.255 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.255 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:12.255 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:12.515 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.515 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.515 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:12.515 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.515 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.515 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.775 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.775 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.775 [225/268] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.775 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:13.035 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.975 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:15.355 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.355 [230/268] Linking target lib/librte_eal.so.24.1 00:02:15.615 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:15.615 [232/268] Linking target lib/librte_ring.so.24.1 00:02:15.615 [233/268] Linking target lib/librte_meter.so.24.1 00:02:15.615 [234/268] Linking target lib/librte_timer.so.24.1 00:02:15.615 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:15.615 [236/268] Linking target lib/librte_pci.so.24.1 00:02:15.615 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:15.874 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:15.874 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:15.874 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:15.874 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:15.874 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:15.874 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:15.874 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:15.875 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:15.875 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:15.875 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:15.875 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:02:15.875 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:16.134 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:16.134 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:16.134 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:16.134 [253/268] Linking target lib/librte_net.so.24.1 00:02:16.134 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:16.392 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:16.392 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:16.392 [257/268] Linking target lib/librte_security.so.24.1 00:02:16.392 [258/268] Linking target lib/librte_hash.so.24.1 00:02:16.392 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:16.652 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:16.911 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.170 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:17.170 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:17.170 [264/268] Linking target lib/librte_power.so.24.1 00:02:17.738 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.738 [266/268] Linking static target lib/librte_vhost.a 00:02:20.272 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.272 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:20.272 INFO: autodetecting backend as ninja 00:02:20.272 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:42.231 CC lib/log/log.o 00:02:42.231 CC lib/log/log_deprecated.o 00:02:42.231 CC lib/log/log_flags.o 00:02:42.231 CC lib/ut_mock/mock.o 00:02:42.231 CC lib/ut/ut.o 00:02:42.231 LIB libspdk_log.a 00:02:42.231 LIB 
libspdk_ut_mock.a 00:02:42.231 LIB libspdk_ut.a 00:02:42.231 SO libspdk_ut_mock.so.6.0 00:02:42.231 SO libspdk_log.so.7.1 00:02:42.231 SO libspdk_ut.so.2.0 00:02:42.231 SYMLINK libspdk_ut.so 00:02:42.231 SYMLINK libspdk_log.so 00:02:42.231 SYMLINK libspdk_ut_mock.so 00:02:42.231 CC lib/ioat/ioat.o 00:02:42.231 CC lib/dma/dma.o 00:02:42.231 CXX lib/trace_parser/trace.o 00:02:42.231 CC lib/util/base64.o 00:02:42.231 CC lib/util/cpuset.o 00:02:42.231 CC lib/util/bit_array.o 00:02:42.231 CC lib/util/crc32.o 00:02:42.231 CC lib/util/crc16.o 00:02:42.231 CC lib/util/crc32c.o 00:02:42.231 CC lib/vfio_user/host/vfio_user_pci.o 00:02:42.231 CC lib/vfio_user/host/vfio_user.o 00:02:42.231 CC lib/util/crc32_ieee.o 00:02:42.231 CC lib/util/crc64.o 00:02:42.231 CC lib/util/dif.o 00:02:42.231 CC lib/util/fd.o 00:02:42.231 LIB libspdk_dma.a 00:02:42.231 CC lib/util/fd_group.o 00:02:42.231 SO libspdk_dma.so.5.0 00:02:42.231 CC lib/util/file.o 00:02:42.231 LIB libspdk_ioat.a 00:02:42.231 SO libspdk_ioat.so.7.0 00:02:42.231 SYMLINK libspdk_dma.so 00:02:42.231 CC lib/util/hexlify.o 00:02:42.231 CC lib/util/iov.o 00:02:42.231 CC lib/util/math.o 00:02:42.231 LIB libspdk_vfio_user.a 00:02:42.231 CC lib/util/net.o 00:02:42.231 SO libspdk_vfio_user.so.5.0 00:02:42.231 SYMLINK libspdk_ioat.so 00:02:42.231 CC lib/util/pipe.o 00:02:42.231 CC lib/util/strerror_tls.o 00:02:42.231 SYMLINK libspdk_vfio_user.so 00:02:42.231 CC lib/util/string.o 00:02:42.231 CC lib/util/uuid.o 00:02:42.231 CC lib/util/xor.o 00:02:42.231 CC lib/util/zipf.o 00:02:42.231 CC lib/util/md5.o 00:02:42.231 LIB libspdk_util.a 00:02:42.231 SO libspdk_util.so.10.1 00:02:42.231 LIB libspdk_trace_parser.a 00:02:42.231 SO libspdk_trace_parser.so.6.0 00:02:42.231 SYMLINK libspdk_util.so 00:02:42.231 SYMLINK libspdk_trace_parser.so 00:02:42.231 CC lib/env_dpdk/env.o 00:02:42.231 CC lib/env_dpdk/memory.o 00:02:42.231 CC lib/env_dpdk/pci.o 00:02:42.231 CC lib/env_dpdk/init.o 00:02:42.231 CC lib/env_dpdk/threads.o 00:02:42.231 CC 
lib/json/json_parse.o 00:02:42.231 CC lib/idxd/idxd.o 00:02:42.231 CC lib/rdma_utils/rdma_utils.o 00:02:42.231 CC lib/vmd/vmd.o 00:02:42.231 CC lib/conf/conf.o 00:02:42.231 CC lib/vmd/led.o 00:02:42.231 LIB libspdk_conf.a 00:02:42.231 SO libspdk_conf.so.6.0 00:02:42.231 LIB libspdk_rdma_utils.a 00:02:42.231 CC lib/json/json_util.o 00:02:42.231 SO libspdk_rdma_utils.so.1.0 00:02:42.231 SYMLINK libspdk_conf.so 00:02:42.231 CC lib/json/json_write.o 00:02:42.231 SYMLINK libspdk_rdma_utils.so 00:02:42.231 CC lib/env_dpdk/pci_ioat.o 00:02:42.231 CC lib/env_dpdk/pci_virtio.o 00:02:42.231 CC lib/env_dpdk/pci_vmd.o 00:02:42.231 CC lib/env_dpdk/pci_idxd.o 00:02:42.231 CC lib/env_dpdk/pci_event.o 00:02:42.231 CC lib/env_dpdk/sigbus_handler.o 00:02:42.231 CC lib/env_dpdk/pci_dpdk.o 00:02:42.231 CC lib/rdma_provider/common.o 00:02:42.231 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:42.231 LIB libspdk_json.a 00:02:42.231 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:42.231 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:42.231 CC lib/idxd/idxd_user.o 00:02:42.231 SO libspdk_json.so.6.0 00:02:42.231 CC lib/idxd/idxd_kernel.o 00:02:42.231 SYMLINK libspdk_json.so 00:02:42.231 LIB libspdk_rdma_provider.a 00:02:42.231 SO libspdk_rdma_provider.so.7.0 00:02:42.231 SYMLINK libspdk_rdma_provider.so 00:02:42.231 LIB libspdk_vmd.a 00:02:42.231 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:42.231 CC lib/jsonrpc/jsonrpc_client.o 00:02:42.231 CC lib/jsonrpc/jsonrpc_server.o 00:02:42.231 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:42.231 LIB libspdk_idxd.a 00:02:42.231 SO libspdk_vmd.so.6.0 00:02:42.490 SO libspdk_idxd.so.12.1 00:02:42.490 SYMLINK libspdk_vmd.so 00:02:42.490 SYMLINK libspdk_idxd.so 00:02:42.490 LIB libspdk_jsonrpc.a 00:02:42.747 SO libspdk_jsonrpc.so.6.0 00:02:42.748 SYMLINK libspdk_jsonrpc.so 00:02:43.006 CC lib/rpc/rpc.o 00:02:43.006 LIB libspdk_env_dpdk.a 00:02:43.265 SO libspdk_env_dpdk.so.15.1 00:02:43.265 LIB libspdk_rpc.a 00:02:43.265 SO libspdk_rpc.so.6.0 00:02:43.265 SYMLINK 
libspdk_env_dpdk.so 00:02:43.524 SYMLINK libspdk_rpc.so 00:02:43.784 CC lib/trace/trace.o 00:02:43.784 CC lib/trace/trace_flags.o 00:02:43.784 CC lib/trace/trace_rpc.o 00:02:43.784 CC lib/keyring/keyring.o 00:02:43.784 CC lib/keyring/keyring_rpc.o 00:02:43.784 CC lib/notify/notify_rpc.o 00:02:43.784 CC lib/notify/notify.o 00:02:44.057 LIB libspdk_notify.a 00:02:44.057 SO libspdk_notify.so.6.0 00:02:44.057 LIB libspdk_keyring.a 00:02:44.057 LIB libspdk_trace.a 00:02:44.057 SYMLINK libspdk_notify.so 00:02:44.057 SO libspdk_keyring.so.2.0 00:02:44.057 SO libspdk_trace.so.11.0 00:02:44.057 SYMLINK libspdk_keyring.so 00:02:44.315 SYMLINK libspdk_trace.so 00:02:44.574 CC lib/thread/thread.o 00:02:44.574 CC lib/thread/iobuf.o 00:02:44.574 CC lib/sock/sock.o 00:02:44.574 CC lib/sock/sock_rpc.o 00:02:45.140 LIB libspdk_sock.a 00:02:45.140 SO libspdk_sock.so.10.0 00:02:45.140 SYMLINK libspdk_sock.so 00:02:45.707 CC lib/nvme/nvme_fabric.o 00:02:45.707 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:45.707 CC lib/nvme/nvme_ctrlr.o 00:02:45.707 CC lib/nvme/nvme_ns_cmd.o 00:02:45.707 CC lib/nvme/nvme_pcie_common.o 00:02:45.707 CC lib/nvme/nvme_ns.o 00:02:45.707 CC lib/nvme/nvme_pcie.o 00:02:45.707 CC lib/nvme/nvme_qpair.o 00:02:45.707 CC lib/nvme/nvme.o 00:02:46.273 CC lib/nvme/nvme_quirks.o 00:02:46.273 CC lib/nvme/nvme_transport.o 00:02:46.273 CC lib/nvme/nvme_discovery.o 00:02:46.273 LIB libspdk_thread.a 00:02:46.273 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.273 SO libspdk_thread.so.11.0 00:02:46.531 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.531 SYMLINK libspdk_thread.so 00:02:46.531 CC lib/nvme/nvme_tcp.o 00:02:46.531 CC lib/nvme/nvme_opal.o 00:02:46.531 CC lib/nvme/nvme_io_msg.o 00:02:46.789 CC lib/nvme/nvme_poll_group.o 00:02:46.789 CC lib/nvme/nvme_zns.o 00:02:46.789 CC lib/nvme/nvme_stubs.o 00:02:47.049 CC lib/nvme/nvme_auth.o 00:02:47.049 CC lib/accel/accel.o 00:02:47.308 CC lib/nvme/nvme_cuse.o 00:02:47.308 CC lib/blob/blobstore.o 00:02:47.308 CC lib/init/json_config.o 
00:02:47.566 CC lib/accel/accel_rpc.o 00:02:47.566 CC lib/accel/accel_sw.o 00:02:47.566 CC lib/blob/request.o 00:02:47.826 CC lib/blob/zeroes.o 00:02:47.826 CC lib/init/subsystem.o 00:02:47.826 CC lib/init/subsystem_rpc.o 00:02:47.826 CC lib/init/rpc.o 00:02:48.085 CC lib/blob/blob_bs_dev.o 00:02:48.085 LIB libspdk_init.a 00:02:48.085 CC lib/nvme/nvme_rdma.o 00:02:48.085 SO libspdk_init.so.6.0 00:02:48.343 SYMLINK libspdk_init.so 00:02:48.343 CC lib/virtio/virtio.o 00:02:48.343 CC lib/virtio/virtio_vhost_user.o 00:02:48.343 CC lib/virtio/virtio_vfio_user.o 00:02:48.343 CC lib/virtio/virtio_pci.o 00:02:48.344 CC lib/fsdev/fsdev.o 00:02:48.344 CC lib/fsdev/fsdev_io.o 00:02:48.603 LIB libspdk_accel.a 00:02:48.603 CC lib/event/app.o 00:02:48.603 SO libspdk_accel.so.16.0 00:02:48.603 CC lib/event/reactor.o 00:02:48.603 SYMLINK libspdk_accel.so 00:02:48.603 CC lib/event/log_rpc.o 00:02:48.603 CC lib/event/app_rpc.o 00:02:48.603 CC lib/fsdev/fsdev_rpc.o 00:02:48.603 LIB libspdk_virtio.a 00:02:48.603 SO libspdk_virtio.so.7.0 00:02:48.862 SYMLINK libspdk_virtio.so 00:02:48.862 CC lib/event/scheduler_static.o 00:02:49.121 CC lib/bdev/bdev.o 00:02:49.121 CC lib/bdev/bdev_rpc.o 00:02:49.121 CC lib/bdev/part.o 00:02:49.121 CC lib/bdev/scsi_nvme.o 00:02:49.121 CC lib/bdev/bdev_zone.o 00:02:49.121 LIB libspdk_fsdev.a 00:02:49.121 SO libspdk_fsdev.so.2.0 00:02:49.121 LIB libspdk_event.a 00:02:49.121 SO libspdk_event.so.14.0 00:02:49.122 SYMLINK libspdk_fsdev.so 00:02:49.122 SYMLINK libspdk_event.so 00:02:49.380 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:49.639 LIB libspdk_nvme.a 00:02:49.897 SO libspdk_nvme.so.15.0 00:02:50.156 SYMLINK libspdk_nvme.so 00:02:50.156 LIB libspdk_fuse_dispatcher.a 00:02:50.156 SO libspdk_fuse_dispatcher.so.1.0 00:02:50.415 SYMLINK libspdk_fuse_dispatcher.so 00:02:51.352 LIB libspdk_blob.a 00:02:51.352 SO libspdk_blob.so.11.0 00:02:51.352 SYMLINK libspdk_blob.so 00:02:51.937 CC lib/lvol/lvol.o 00:02:51.937 CC lib/blobfs/tree.o 00:02:51.937 CC 
lib/blobfs/blobfs.o 00:02:52.195 LIB libspdk_bdev.a 00:02:52.195 SO libspdk_bdev.so.17.0 00:02:52.455 SYMLINK libspdk_bdev.so 00:02:52.713 CC lib/scsi/dev.o 00:02:52.713 CC lib/scsi/lun.o 00:02:52.713 CC lib/scsi/port.o 00:02:52.713 CC lib/scsi/scsi.o 00:02:52.713 CC lib/ftl/ftl_core.o 00:02:52.713 CC lib/ublk/ublk.o 00:02:52.713 CC lib/nvmf/ctrlr.o 00:02:52.713 CC lib/nbd/nbd.o 00:02:52.973 CC lib/nbd/nbd_rpc.o 00:02:52.973 CC lib/nvmf/ctrlr_discovery.o 00:02:52.973 LIB libspdk_blobfs.a 00:02:52.973 SO libspdk_blobfs.so.10.0 00:02:52.973 CC lib/ublk/ublk_rpc.o 00:02:52.973 SYMLINK libspdk_blobfs.so 00:02:52.973 CC lib/nvmf/ctrlr_bdev.o 00:02:52.973 CC lib/scsi/scsi_bdev.o 00:02:52.973 CC lib/nvmf/subsystem.o 00:02:52.973 LIB libspdk_lvol.a 00:02:53.232 SO libspdk_lvol.so.10.0 00:02:53.232 CC lib/ftl/ftl_init.o 00:02:53.232 CC lib/ftl/ftl_layout.o 00:02:53.232 SYMLINK libspdk_lvol.so 00:02:53.232 CC lib/ftl/ftl_debug.o 00:02:53.232 LIB libspdk_nbd.a 00:02:53.232 SO libspdk_nbd.so.7.0 00:02:53.232 SYMLINK libspdk_nbd.so 00:02:53.232 CC lib/ftl/ftl_io.o 00:02:53.492 CC lib/scsi/scsi_pr.o 00:02:53.492 LIB libspdk_ublk.a 00:02:53.492 CC lib/ftl/ftl_sb.o 00:02:53.492 SO libspdk_ublk.so.3.0 00:02:53.492 CC lib/scsi/scsi_rpc.o 00:02:53.492 SYMLINK libspdk_ublk.so 00:02:53.492 CC lib/scsi/task.o 00:02:53.492 CC lib/ftl/ftl_l2p.o 00:02:53.492 CC lib/ftl/ftl_l2p_flat.o 00:02:53.751 CC lib/ftl/ftl_nv_cache.o 00:02:53.751 CC lib/ftl/ftl_band.o 00:02:53.751 CC lib/nvmf/nvmf.o 00:02:53.751 CC lib/nvmf/nvmf_rpc.o 00:02:53.751 CC lib/nvmf/transport.o 00:02:53.751 CC lib/ftl/ftl_band_ops.o 00:02:54.011 LIB libspdk_scsi.a 00:02:54.011 CC lib/nvmf/tcp.o 00:02:54.011 SO libspdk_scsi.so.9.0 00:02:54.011 SYMLINK libspdk_scsi.so 00:02:54.011 CC lib/nvmf/stubs.o 00:02:54.270 CC lib/nvmf/mdns_server.o 00:02:54.529 CC lib/iscsi/conn.o 00:02:54.529 CC lib/nvmf/rdma.o 00:02:54.820 CC lib/nvmf/auth.o 00:02:54.820 CC lib/iscsi/init_grp.o 00:02:54.820 CC lib/ftl/ftl_writer.o 00:02:55.078 CC 
lib/iscsi/iscsi.o 00:02:55.078 CC lib/vhost/vhost.o 00:02:55.078 CC lib/vhost/vhost_rpc.o 00:02:55.078 CC lib/iscsi/param.o 00:02:55.078 CC lib/iscsi/portal_grp.o 00:02:55.078 CC lib/ftl/ftl_rq.o 00:02:55.337 CC lib/ftl/ftl_reloc.o 00:02:55.337 CC lib/iscsi/tgt_node.o 00:02:55.596 CC lib/ftl/ftl_l2p_cache.o 00:02:55.596 CC lib/ftl/ftl_p2l.o 00:02:55.596 CC lib/ftl/ftl_p2l_log.o 00:02:55.853 CC lib/ftl/mngt/ftl_mngt.o 00:02:55.853 CC lib/vhost/vhost_scsi.o 00:02:55.853 CC lib/iscsi/iscsi_subsystem.o 00:02:55.853 CC lib/iscsi/iscsi_rpc.o 00:02:55.853 CC lib/iscsi/task.o 00:02:56.111 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:56.111 CC lib/vhost/vhost_blk.o 00:02:56.111 CC lib/vhost/rte_vhost_user.o 00:02:56.111 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:56.111 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:56.369 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:56.370 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:56.370 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:56.370 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:56.370 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:56.628 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:56.628 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:56.628 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:56.628 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:56.628 LIB libspdk_iscsi.a 00:02:56.886 CC lib/ftl/utils/ftl_conf.o 00:02:56.886 CC lib/ftl/utils/ftl_md.o 00:02:56.886 SO libspdk_iscsi.so.8.0 00:02:56.886 CC lib/ftl/utils/ftl_mempool.o 00:02:56.886 CC lib/ftl/utils/ftl_bitmap.o 00:02:56.886 CC lib/ftl/utils/ftl_property.o 00:02:56.886 SYMLINK libspdk_iscsi.so 00:02:56.886 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:56.886 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:56.886 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:57.143 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:57.143 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:57.143 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:57.143 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:57.143 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:57.143 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:57.143 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:02:57.401 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:57.401 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:57.401 LIB libspdk_nvmf.a 00:02:57.401 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:57.401 CC lib/ftl/base/ftl_base_dev.o 00:02:57.401 LIB libspdk_vhost.a 00:02:57.401 CC lib/ftl/base/ftl_base_bdev.o 00:02:57.401 CC lib/ftl/ftl_trace.o 00:02:57.401 SO libspdk_vhost.so.8.0 00:02:57.401 SO libspdk_nvmf.so.20.0 00:02:57.659 SYMLINK libspdk_vhost.so 00:02:57.659 LIB libspdk_ftl.a 00:02:57.659 SYMLINK libspdk_nvmf.so 00:02:57.917 SO libspdk_ftl.so.9.0 00:02:58.174 SYMLINK libspdk_ftl.so 00:02:58.740 CC module/env_dpdk/env_dpdk_rpc.o 00:02:58.740 CC module/blob/bdev/blob_bdev.o 00:02:58.740 CC module/keyring/linux/keyring.o 00:02:58.740 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:58.740 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:58.740 CC module/keyring/file/keyring.o 00:02:58.740 CC module/fsdev/aio/fsdev_aio.o 00:02:58.740 CC module/scheduler/gscheduler/gscheduler.o 00:02:58.740 CC module/sock/posix/posix.o 00:02:58.740 CC module/accel/error/accel_error.o 00:02:58.740 LIB libspdk_env_dpdk_rpc.a 00:02:58.740 SO libspdk_env_dpdk_rpc.so.6.0 00:02:58.740 CC module/keyring/linux/keyring_rpc.o 00:02:58.740 CC module/keyring/file/keyring_rpc.o 00:02:58.740 SYMLINK libspdk_env_dpdk_rpc.so 00:02:58.740 LIB libspdk_scheduler_dpdk_governor.a 00:02:58.740 CC module/accel/error/accel_error_rpc.o 00:02:58.740 LIB libspdk_scheduler_gscheduler.a 00:02:58.998 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:58.999 SO libspdk_scheduler_gscheduler.so.4.0 00:02:58.999 LIB libspdk_scheduler_dynamic.a 00:02:58.999 SO libspdk_scheduler_dynamic.so.4.0 00:02:58.999 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:58.999 SYMLINK libspdk_scheduler_gscheduler.so 00:02:58.999 LIB libspdk_keyring_linux.a 00:02:58.999 SYMLINK libspdk_scheduler_dynamic.so 00:02:58.999 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:58.999 LIB libspdk_blob_bdev.a 00:02:58.999 
SO libspdk_keyring_linux.so.1.0 00:02:58.999 LIB libspdk_keyring_file.a 00:02:58.999 SO libspdk_blob_bdev.so.11.0 00:02:58.999 LIB libspdk_accel_error.a 00:02:58.999 SO libspdk_keyring_file.so.2.0 00:02:58.999 SO libspdk_accel_error.so.2.0 00:02:58.999 SYMLINK libspdk_keyring_linux.so 00:02:58.999 SYMLINK libspdk_blob_bdev.so 00:02:58.999 CC module/fsdev/aio/linux_aio_mgr.o 00:02:58.999 SYMLINK libspdk_keyring_file.so 00:02:58.999 SYMLINK libspdk_accel_error.so 00:02:58.999 CC module/accel/dsa/accel_dsa.o 00:02:58.999 CC module/accel/dsa/accel_dsa_rpc.o 00:02:59.258 CC module/accel/ioat/accel_ioat.o 00:02:59.258 CC module/accel/iaa/accel_iaa.o 00:02:59.258 CC module/accel/iaa/accel_iaa_rpc.o 00:02:59.258 CC module/accel/ioat/accel_ioat_rpc.o 00:02:59.258 CC module/blobfs/bdev/blobfs_bdev.o 00:02:59.258 CC module/bdev/delay/vbdev_delay.o 00:02:59.258 LIB libspdk_accel_iaa.a 00:02:59.517 SO libspdk_accel_iaa.so.3.0 00:02:59.517 LIB libspdk_accel_ioat.a 00:02:59.517 LIB libspdk_accel_dsa.a 00:02:59.517 SO libspdk_accel_ioat.so.6.0 00:02:59.517 CC module/bdev/gpt/gpt.o 00:02:59.517 CC module/bdev/error/vbdev_error.o 00:02:59.517 SO libspdk_accel_dsa.so.5.0 00:02:59.517 SYMLINK libspdk_accel_iaa.so 00:02:59.517 CC module/bdev/lvol/vbdev_lvol.o 00:02:59.517 CC module/bdev/error/vbdev_error_rpc.o 00:02:59.517 LIB libspdk_fsdev_aio.a 00:02:59.517 SYMLINK libspdk_accel_ioat.so 00:02:59.517 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:59.517 SO libspdk_fsdev_aio.so.1.0 00:02:59.517 SYMLINK libspdk_accel_dsa.so 00:02:59.517 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:59.778 LIB libspdk_sock_posix.a 00:02:59.778 SYMLINK libspdk_fsdev_aio.so 00:02:59.778 CC module/bdev/gpt/vbdev_gpt.o 00:02:59.778 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:59.778 SO libspdk_sock_posix.so.6.0 00:02:59.778 CC module/bdev/malloc/bdev_malloc.o 00:02:59.778 LIB libspdk_bdev_error.a 00:02:59.778 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:59.778 LIB libspdk_blobfs_bdev.a 00:02:59.778 SO 
libspdk_bdev_error.so.6.0 00:02:59.778 SO libspdk_blobfs_bdev.so.6.0 00:02:59.778 SYMLINK libspdk_sock_posix.so 00:02:59.778 LIB libspdk_bdev_delay.a 00:02:59.778 CC module/bdev/null/bdev_null.o 00:02:59.778 SYMLINK libspdk_bdev_error.so 00:03:00.038 SO libspdk_bdev_delay.so.6.0 00:03:00.038 SYMLINK libspdk_blobfs_bdev.so 00:03:00.038 SYMLINK libspdk_bdev_delay.so 00:03:00.038 CC module/bdev/null/bdev_null_rpc.o 00:03:00.038 LIB libspdk_bdev_gpt.a 00:03:00.038 CC module/bdev/nvme/bdev_nvme.o 00:03:00.038 SO libspdk_bdev_gpt.so.6.0 00:03:00.038 CC module/bdev/passthru/vbdev_passthru.o 00:03:00.038 CC module/bdev/raid/bdev_raid.o 00:03:00.038 SYMLINK libspdk_bdev_gpt.so 00:03:00.038 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.038 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.296 CC module/bdev/split/vbdev_split.o 00:03:00.296 CC module/bdev/split/vbdev_split_rpc.o 00:03:00.296 LIB libspdk_bdev_null.a 00:03:00.296 LIB libspdk_bdev_lvol.a 00:03:00.296 SO libspdk_bdev_null.so.6.0 00:03:00.296 SO libspdk_bdev_lvol.so.6.0 00:03:00.296 LIB libspdk_bdev_malloc.a 00:03:00.296 SYMLINK libspdk_bdev_null.so 00:03:00.296 SO libspdk_bdev_malloc.so.6.0 00:03:00.296 CC module/bdev/nvme/nvme_rpc.o 00:03:00.296 SYMLINK libspdk_bdev_lvol.so 00:03:00.296 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.296 SYMLINK libspdk_bdev_malloc.so 00:03:00.296 CC module/bdev/nvme/vbdev_opal.o 00:03:00.296 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:00.553 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.554 CC module/bdev/raid/raid0.o 00:03:00.554 LIB libspdk_bdev_split.a 00:03:00.554 SO libspdk_bdev_split.so.6.0 00:03:00.554 SYMLINK libspdk_bdev_split.so 00:03:00.554 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.554 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.881 LIB libspdk_bdev_passthru.a 00:03:00.881 CC module/bdev/raid/raid1.o 00:03:00.881 SO libspdk_bdev_passthru.so.6.0 00:03:00.881 CC module/bdev/raid/concat.o 00:03:00.881 SYMLINK libspdk_bdev_passthru.so 00:03:00.881 CC 
module/bdev/raid/raid5f.o 00:03:01.139 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:01.139 CC module/bdev/aio/bdev_aio.o 00:03:01.139 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:01.139 CC module/bdev/ftl/bdev_ftl.o 00:03:01.139 CC module/bdev/iscsi/bdev_iscsi.o 00:03:01.139 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:01.397 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:01.397 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:01.397 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:01.655 LIB libspdk_bdev_zone_block.a 00:03:01.655 LIB libspdk_bdev_ftl.a 00:03:01.655 CC module/bdev/aio/bdev_aio_rpc.o 00:03:01.655 SO libspdk_bdev_zone_block.so.6.0 00:03:01.655 SO libspdk_bdev_ftl.so.6.0 00:03:01.655 SYMLINK libspdk_bdev_zone_block.so 00:03:01.655 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:01.655 LIB libspdk_bdev_raid.a 00:03:01.655 SYMLINK libspdk_bdev_ftl.so 00:03:01.655 SO libspdk_bdev_raid.so.6.0 00:03:01.912 LIB libspdk_bdev_aio.a 00:03:01.912 SO libspdk_bdev_aio.so.6.0 00:03:01.912 SYMLINK libspdk_bdev_raid.so 00:03:01.912 LIB libspdk_bdev_iscsi.a 00:03:01.912 SYMLINK libspdk_bdev_aio.so 00:03:01.912 SO libspdk_bdev_iscsi.so.6.0 00:03:01.912 SYMLINK libspdk_bdev_iscsi.so 00:03:02.170 LIB libspdk_bdev_virtio.a 00:03:02.170 SO libspdk_bdev_virtio.so.6.0 00:03:02.170 SYMLINK libspdk_bdev_virtio.so 00:03:04.087 LIB libspdk_bdev_nvme.a 00:03:04.087 SO libspdk_bdev_nvme.so.7.1 00:03:04.405 SYMLINK libspdk_bdev_nvme.so 00:03:04.972 CC module/event/subsystems/vmd/vmd.o 00:03:04.972 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:04.972 CC module/event/subsystems/keyring/keyring.o 00:03:04.972 CC module/event/subsystems/sock/sock.o 00:03:04.972 CC module/event/subsystems/scheduler/scheduler.o 00:03:04.972 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:04.972 CC module/event/subsystems/iobuf/iobuf.o 00:03:04.972 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:04.972 CC module/event/subsystems/fsdev/fsdev.o 00:03:04.972 LIB libspdk_event_keyring.a 
00:03:04.972 LIB libspdk_event_sock.a 00:03:04.972 LIB libspdk_event_fsdev.a 00:03:04.972 SO libspdk_event_keyring.so.1.0 00:03:04.972 LIB libspdk_event_scheduler.a 00:03:04.972 SO libspdk_event_sock.so.5.0 00:03:04.972 SO libspdk_event_fsdev.so.1.0 00:03:04.972 LIB libspdk_event_vmd.a 00:03:04.972 SO libspdk_event_scheduler.so.4.0 00:03:04.972 LIB libspdk_event_vhost_blk.a 00:03:04.972 LIB libspdk_event_iobuf.a 00:03:04.972 SYMLINK libspdk_event_keyring.so 00:03:05.230 SO libspdk_event_vmd.so.6.0 00:03:05.230 SYMLINK libspdk_event_sock.so 00:03:05.230 SO libspdk_event_vhost_blk.so.3.0 00:03:05.230 SYMLINK libspdk_event_fsdev.so 00:03:05.230 SO libspdk_event_iobuf.so.3.0 00:03:05.230 SYMLINK libspdk_event_scheduler.so 00:03:05.230 SYMLINK libspdk_event_vmd.so 00:03:05.230 SYMLINK libspdk_event_vhost_blk.so 00:03:05.230 SYMLINK libspdk_event_iobuf.so 00:03:05.490 CC module/event/subsystems/accel/accel.o 00:03:05.748 LIB libspdk_event_accel.a 00:03:05.748 SO libspdk_event_accel.so.6.0 00:03:05.748 SYMLINK libspdk_event_accel.so 00:03:06.317 CC module/event/subsystems/bdev/bdev.o 00:03:06.317 LIB libspdk_event_bdev.a 00:03:06.577 SO libspdk_event_bdev.so.6.0 00:03:06.577 SYMLINK libspdk_event_bdev.so 00:03:06.837 CC module/event/subsystems/nbd/nbd.o 00:03:06.837 CC module/event/subsystems/ublk/ublk.o 00:03:06.837 CC module/event/subsystems/scsi/scsi.o 00:03:06.837 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:06.837 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:07.097 LIB libspdk_event_nbd.a 00:03:07.097 LIB libspdk_event_ublk.a 00:03:07.097 SO libspdk_event_nbd.so.6.0 00:03:07.097 SO libspdk_event_ublk.so.3.0 00:03:07.097 LIB libspdk_event_scsi.a 00:03:07.097 SYMLINK libspdk_event_nbd.so 00:03:07.097 SYMLINK libspdk_event_ublk.so 00:03:07.097 SO libspdk_event_scsi.so.6.0 00:03:07.097 SYMLINK libspdk_event_scsi.so 00:03:07.097 LIB libspdk_event_nvmf.a 00:03:07.357 SO libspdk_event_nvmf.so.6.0 00:03:07.357 SYMLINK libspdk_event_nvmf.so 00:03:07.617 CC 
module/event/subsystems/iscsi/iscsi.o 00:03:07.617 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:07.931 LIB libspdk_event_vhost_scsi.a 00:03:07.931 LIB libspdk_event_iscsi.a 00:03:07.931 SO libspdk_event_vhost_scsi.so.3.0 00:03:07.931 SO libspdk_event_iscsi.so.6.0 00:03:07.931 SYMLINK libspdk_event_vhost_scsi.so 00:03:07.931 SYMLINK libspdk_event_iscsi.so 00:03:08.190 SO libspdk.so.6.0 00:03:08.190 SYMLINK libspdk.so 00:03:08.450 CC test/rpc_client/rpc_client_test.o 00:03:08.450 TEST_HEADER include/spdk/accel.h 00:03:08.450 TEST_HEADER include/spdk/accel_module.h 00:03:08.450 TEST_HEADER include/spdk/assert.h 00:03:08.450 CXX app/trace/trace.o 00:03:08.450 TEST_HEADER include/spdk/barrier.h 00:03:08.450 TEST_HEADER include/spdk/base64.h 00:03:08.450 TEST_HEADER include/spdk/bdev.h 00:03:08.450 TEST_HEADER include/spdk/bdev_module.h 00:03:08.450 TEST_HEADER include/spdk/bdev_zone.h 00:03:08.450 TEST_HEADER include/spdk/bit_array.h 00:03:08.450 TEST_HEADER include/spdk/bit_pool.h 00:03:08.450 TEST_HEADER include/spdk/blob_bdev.h 00:03:08.450 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:08.450 TEST_HEADER include/spdk/blobfs.h 00:03:08.450 TEST_HEADER include/spdk/blob.h 00:03:08.450 TEST_HEADER include/spdk/conf.h 00:03:08.450 TEST_HEADER include/spdk/config.h 00:03:08.450 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:08.450 TEST_HEADER include/spdk/cpuset.h 00:03:08.450 TEST_HEADER include/spdk/crc16.h 00:03:08.450 TEST_HEADER include/spdk/crc32.h 00:03:08.450 TEST_HEADER include/spdk/crc64.h 00:03:08.450 TEST_HEADER include/spdk/dif.h 00:03:08.450 TEST_HEADER include/spdk/dma.h 00:03:08.450 TEST_HEADER include/spdk/endian.h 00:03:08.450 TEST_HEADER include/spdk/env_dpdk.h 00:03:08.450 TEST_HEADER include/spdk/env.h 00:03:08.450 TEST_HEADER include/spdk/event.h 00:03:08.450 TEST_HEADER include/spdk/fd_group.h 00:03:08.450 TEST_HEADER include/spdk/fd.h 00:03:08.450 TEST_HEADER include/spdk/file.h 00:03:08.450 TEST_HEADER include/spdk/fsdev.h 00:03:08.450 
TEST_HEADER include/spdk/fsdev_module.h 00:03:08.450 CC examples/ioat/perf/perf.o 00:03:08.450 TEST_HEADER include/spdk/ftl.h 00:03:08.450 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:08.450 TEST_HEADER include/spdk/gpt_spec.h 00:03:08.450 TEST_HEADER include/spdk/hexlify.h 00:03:08.450 TEST_HEADER include/spdk/histogram_data.h 00:03:08.450 TEST_HEADER include/spdk/idxd.h 00:03:08.450 TEST_HEADER include/spdk/idxd_spec.h 00:03:08.450 TEST_HEADER include/spdk/init.h 00:03:08.450 TEST_HEADER include/spdk/ioat.h 00:03:08.450 CC test/thread/poller_perf/poller_perf.o 00:03:08.450 TEST_HEADER include/spdk/ioat_spec.h 00:03:08.450 CC examples/util/zipf/zipf.o 00:03:08.450 TEST_HEADER include/spdk/iscsi_spec.h 00:03:08.450 TEST_HEADER include/spdk/json.h 00:03:08.450 TEST_HEADER include/spdk/jsonrpc.h 00:03:08.450 TEST_HEADER include/spdk/keyring.h 00:03:08.450 TEST_HEADER include/spdk/keyring_module.h 00:03:08.450 TEST_HEADER include/spdk/likely.h 00:03:08.450 TEST_HEADER include/spdk/log.h 00:03:08.450 TEST_HEADER include/spdk/lvol.h 00:03:08.450 TEST_HEADER include/spdk/md5.h 00:03:08.450 TEST_HEADER include/spdk/memory.h 00:03:08.450 TEST_HEADER include/spdk/mmio.h 00:03:08.450 TEST_HEADER include/spdk/nbd.h 00:03:08.450 TEST_HEADER include/spdk/net.h 00:03:08.450 TEST_HEADER include/spdk/notify.h 00:03:08.450 CC test/app/bdev_svc/bdev_svc.o 00:03:08.450 TEST_HEADER include/spdk/nvme.h 00:03:08.450 TEST_HEADER include/spdk/nvme_intel.h 00:03:08.450 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:08.450 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:08.450 TEST_HEADER include/spdk/nvme_spec.h 00:03:08.450 TEST_HEADER include/spdk/nvme_zns.h 00:03:08.450 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:08.450 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:08.450 TEST_HEADER include/spdk/nvmf.h 00:03:08.450 TEST_HEADER include/spdk/nvmf_spec.h 00:03:08.450 TEST_HEADER include/spdk/nvmf_transport.h 00:03:08.450 CC test/dma/test_dma/test_dma.o 00:03:08.450 TEST_HEADER 
include/spdk/opal.h 00:03:08.450 TEST_HEADER include/spdk/opal_spec.h 00:03:08.450 TEST_HEADER include/spdk/pci_ids.h 00:03:08.450 TEST_HEADER include/spdk/pipe.h 00:03:08.450 TEST_HEADER include/spdk/queue.h 00:03:08.450 TEST_HEADER include/spdk/reduce.h 00:03:08.450 TEST_HEADER include/spdk/rpc.h 00:03:08.450 CC test/env/mem_callbacks/mem_callbacks.o 00:03:08.450 TEST_HEADER include/spdk/scheduler.h 00:03:08.450 TEST_HEADER include/spdk/scsi.h 00:03:08.450 TEST_HEADER include/spdk/scsi_spec.h 00:03:08.710 TEST_HEADER include/spdk/sock.h 00:03:08.710 TEST_HEADER include/spdk/stdinc.h 00:03:08.710 TEST_HEADER include/spdk/string.h 00:03:08.710 TEST_HEADER include/spdk/thread.h 00:03:08.710 TEST_HEADER include/spdk/trace.h 00:03:08.710 TEST_HEADER include/spdk/trace_parser.h 00:03:08.710 TEST_HEADER include/spdk/tree.h 00:03:08.710 TEST_HEADER include/spdk/ublk.h 00:03:08.710 TEST_HEADER include/spdk/util.h 00:03:08.710 TEST_HEADER include/spdk/uuid.h 00:03:08.710 TEST_HEADER include/spdk/version.h 00:03:08.710 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:08.710 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:08.710 TEST_HEADER include/spdk/vhost.h 00:03:08.710 TEST_HEADER include/spdk/vmd.h 00:03:08.710 TEST_HEADER include/spdk/xor.h 00:03:08.710 TEST_HEADER include/spdk/zipf.h 00:03:08.710 LINK rpc_client_test 00:03:08.710 CXX test/cpp_headers/accel.o 00:03:08.710 LINK interrupt_tgt 00:03:08.710 LINK poller_perf 00:03:08.710 LINK zipf 00:03:08.710 LINK bdev_svc 00:03:08.710 LINK ioat_perf 00:03:08.710 CXX test/cpp_headers/accel_module.o 00:03:08.969 CXX test/cpp_headers/assert.o 00:03:08.969 CXX test/cpp_headers/barrier.o 00:03:08.969 LINK spdk_trace 00:03:08.969 CXX test/cpp_headers/base64.o 00:03:08.969 CC examples/ioat/verify/verify.o 00:03:08.969 CXX test/cpp_headers/bdev.o 00:03:08.969 CC examples/thread/thread/thread_ex.o 00:03:09.228 CC app/trace_record/trace_record.o 00:03:09.228 CC test/event/event_perf/event_perf.o 00:03:09.228 LINK test_dma 
00:03:09.228 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:09.228 LINK mem_callbacks 00:03:09.228 CC app/nvmf_tgt/nvmf_main.o 00:03:09.228 CC test/app/histogram_perf/histogram_perf.o 00:03:09.228 CXX test/cpp_headers/bdev_module.o 00:03:09.228 LINK verify 00:03:09.228 LINK event_perf 00:03:09.487 LINK thread 00:03:09.487 LINK histogram_perf 00:03:09.487 LINK nvmf_tgt 00:03:09.487 CC test/env/vtophys/vtophys.o 00:03:09.487 LINK spdk_trace_record 00:03:09.487 CXX test/cpp_headers/bdev_zone.o 00:03:09.487 CC test/app/jsoncat/jsoncat.o 00:03:09.487 CXX test/cpp_headers/bit_array.o 00:03:09.487 CC test/app/stub/stub.o 00:03:09.487 CC test/event/reactor/reactor.o 00:03:09.487 LINK vtophys 00:03:09.745 LINK nvme_fuzz 00:03:09.745 LINK jsoncat 00:03:09.745 CXX test/cpp_headers/bit_pool.o 00:03:09.745 LINK reactor 00:03:09.745 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.745 LINK stub 00:03:09.745 CC app/spdk_lspci/spdk_lspci.o 00:03:09.745 CC examples/sock/hello_world/hello_sock.o 00:03:09.745 CC app/spdk_tgt/spdk_tgt.o 00:03:09.745 CXX test/cpp_headers/blob_bdev.o 00:03:10.005 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:10.005 LINK spdk_lspci 00:03:10.005 CC test/event/reactor_perf/reactor_perf.o 00:03:10.005 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:10.005 LINK iscsi_tgt 00:03:10.005 CXX test/cpp_headers/blobfs_bdev.o 00:03:10.005 LINK spdk_tgt 00:03:10.005 LINK hello_sock 00:03:10.263 CXX test/cpp_headers/blobfs.o 00:03:10.263 CC test/accel/dif/dif.o 00:03:10.263 LINK env_dpdk_post_init 00:03:10.263 LINK reactor_perf 00:03:10.263 CC test/blobfs/mkfs/mkfs.o 00:03:10.263 CXX test/cpp_headers/blob.o 00:03:10.263 CXX test/cpp_headers/conf.o 00:03:10.521 LINK mkfs 00:03:10.521 CC examples/vmd/lsvmd/lsvmd.o 00:03:10.521 CC app/spdk_nvme_perf/perf.o 00:03:10.521 CC examples/vmd/led/led.o 00:03:10.521 CC test/env/memory/memory_ut.o 00:03:10.521 CC test/event/app_repeat/app_repeat.o 00:03:10.521 CXX test/cpp_headers/config.o 00:03:10.521 CC test/env/pci/pci_ut.o 
00:03:10.521 CXX test/cpp_headers/cpuset.o 00:03:10.521 LINK lsvmd 00:03:10.521 LINK led 00:03:10.521 LINK app_repeat 00:03:10.779 CXX test/cpp_headers/crc16.o 00:03:10.779 CC examples/idxd/perf/perf.o 00:03:10.779 CXX test/cpp_headers/crc32.o 00:03:10.779 CC test/event/scheduler/scheduler.o 00:03:11.037 CC examples/accel/perf/accel_perf.o 00:03:11.037 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:11.037 LINK pci_ut 00:03:11.037 CXX test/cpp_headers/crc64.o 00:03:11.037 LINK dif 00:03:11.037 LINK scheduler 00:03:11.349 CXX test/cpp_headers/dif.o 00:03:11.349 LINK idxd_perf 00:03:11.349 LINK hello_fsdev 00:03:11.349 CXX test/cpp_headers/dma.o 00:03:11.349 CC app/spdk_nvme_identify/identify.o 00:03:11.349 CXX test/cpp_headers/endian.o 00:03:11.349 CXX test/cpp_headers/env_dpdk.o 00:03:11.349 LINK spdk_nvme_perf 00:03:11.609 CXX test/cpp_headers/env.o 00:03:11.609 CC test/lvol/esnap/esnap.o 00:03:11.609 LINK accel_perf 00:03:11.609 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.609 CC examples/blob/hello_world/hello_blob.o 00:03:11.609 CC examples/nvme/hello_world/hello_world.o 00:03:11.609 CXX test/cpp_headers/event.o 00:03:11.609 CC examples/nvme/reconnect/reconnect.o 00:03:11.868 LINK memory_ut 00:03:11.868 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.868 CXX test/cpp_headers/fd_group.o 00:03:11.868 CC app/spdk_nvme_discover/discovery_aer.o 00:03:11.868 LINK hello_blob 00:03:11.868 LINK hello_world 00:03:12.126 CXX test/cpp_headers/fd.o 00:03:12.126 LINK reconnect 00:03:12.126 CC test/nvme/aer/aer.o 00:03:12.126 LINK spdk_nvme_discover 00:03:12.384 CXX test/cpp_headers/file.o 00:03:12.384 LINK iscsi_fuzz 00:03:12.384 CC test/nvme/reset/reset.o 00:03:12.384 LINK vhost_fuzz 00:03:12.384 CC examples/blob/cli/blobcli.o 00:03:12.384 CXX test/cpp_headers/fsdev.o 00:03:12.384 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.384 CC examples/nvme/arbitration/arbitration.o 00:03:12.642 LINK spdk_nvme_identify 00:03:12.642 LINK aer 00:03:12.642 CC 
examples/nvme/hotplug/hotplug.o 00:03:12.642 LINK reset 00:03:12.642 CXX test/cpp_headers/fsdev_module.o 00:03:12.900 CC test/nvme/sgl/sgl.o 00:03:12.900 CC examples/bdev/hello_world/hello_bdev.o 00:03:12.900 CC app/spdk_top/spdk_top.o 00:03:12.900 LINK hotplug 00:03:12.900 LINK arbitration 00:03:12.900 CXX test/cpp_headers/ftl.o 00:03:12.900 LINK blobcli 00:03:13.159 CC app/vhost/vhost.o 00:03:13.159 LINK nvme_manage 00:03:13.159 CXX test/cpp_headers/fuse_dispatcher.o 00:03:13.159 CXX test/cpp_headers/gpt_spec.o 00:03:13.159 LINK hello_bdev 00:03:13.159 LINK sgl 00:03:13.159 CXX test/cpp_headers/hexlify.o 00:03:13.159 LINK vhost 00:03:13.418 CC app/spdk_dd/spdk_dd.o 00:03:13.418 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:13.418 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:13.418 CXX test/cpp_headers/histogram_data.o 00:03:13.418 CC examples/nvme/abort/abort.o 00:03:13.418 CC test/nvme/e2edp/nvme_dp.o 00:03:13.418 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.418 CXX test/cpp_headers/idxd.o 00:03:13.676 LINK cmb_copy 00:03:13.676 LINK pmr_persistence 00:03:13.676 CXX test/cpp_headers/idxd_spec.o 00:03:13.676 LINK spdk_dd 00:03:13.676 LINK nvme_dp 00:03:13.934 CC test/bdev/bdevio/bdevio.o 00:03:13.934 CXX test/cpp_headers/init.o 00:03:13.934 CXX test/cpp_headers/ioat.o 00:03:13.934 LINK abort 00:03:13.934 CXX test/cpp_headers/ioat_spec.o 00:03:13.934 CXX test/cpp_headers/iscsi_spec.o 00:03:13.934 CC test/nvme/overhead/overhead.o 00:03:13.934 CXX test/cpp_headers/json.o 00:03:14.193 LINK spdk_top 00:03:14.193 CC test/nvme/err_injection/err_injection.o 00:03:14.193 CXX test/cpp_headers/jsonrpc.o 00:03:14.193 CC app/fio/nvme/fio_plugin.o 00:03:14.193 CC test/nvme/startup/startup.o 00:03:14.193 LINK bdevio 00:03:14.193 CC test/nvme/reserve/reserve.o 00:03:14.452 LINK overhead 00:03:14.452 CXX test/cpp_headers/keyring.o 00:03:14.452 CC test/nvme/simple_copy/simple_copy.o 00:03:14.452 LINK err_injection 00:03:14.452 LINK startup 00:03:14.452 CXX 
test/cpp_headers/keyring_module.o 00:03:14.452 LINK reserve 00:03:14.710 LINK bdevperf 00:03:14.710 CC app/fio/bdev/fio_plugin.o 00:03:14.710 LINK simple_copy 00:03:14.710 CC test/nvme/connect_stress/connect_stress.o 00:03:14.710 CC test/nvme/boot_partition/boot_partition.o 00:03:14.710 CXX test/cpp_headers/likely.o 00:03:14.710 CC test/nvme/compliance/nvme_compliance.o 00:03:14.969 LINK spdk_nvme 00:03:14.969 LINK connect_stress 00:03:14.969 CXX test/cpp_headers/log.o 00:03:14.969 LINK boot_partition 00:03:14.969 CC test/nvme/fused_ordering/fused_ordering.o 00:03:14.969 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:14.969 CC test/nvme/fdp/fdp.o 00:03:14.969 CC examples/nvmf/nvmf/nvmf.o 00:03:14.969 CXX test/cpp_headers/lvol.o 00:03:15.227 CXX test/cpp_headers/md5.o 00:03:15.227 LINK doorbell_aers 00:03:15.227 LINK fused_ordering 00:03:15.227 CC test/nvme/cuse/cuse.o 00:03:15.227 LINK nvme_compliance 00:03:15.227 CXX test/cpp_headers/memory.o 00:03:15.227 LINK spdk_bdev 00:03:15.227 CXX test/cpp_headers/mmio.o 00:03:15.227 CXX test/cpp_headers/nbd.o 00:03:15.485 CXX test/cpp_headers/net.o 00:03:15.485 CXX test/cpp_headers/notify.o 00:03:15.485 CXX test/cpp_headers/nvme.o 00:03:15.485 LINK nvmf 00:03:15.485 CXX test/cpp_headers/nvme_intel.o 00:03:15.485 CXX test/cpp_headers/nvme_ocssd.o 00:03:15.485 LINK fdp 00:03:15.485 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:15.485 CXX test/cpp_headers/nvme_spec.o 00:03:15.485 CXX test/cpp_headers/nvme_zns.o 00:03:15.485 CXX test/cpp_headers/nvmf_cmd.o 00:03:15.485 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:15.485 CXX test/cpp_headers/nvmf.o 00:03:15.485 CXX test/cpp_headers/nvmf_spec.o 00:03:15.743 CXX test/cpp_headers/nvmf_transport.o 00:03:15.743 CXX test/cpp_headers/opal.o 00:03:15.743 CXX test/cpp_headers/opal_spec.o 00:03:15.743 CXX test/cpp_headers/pci_ids.o 00:03:15.743 CXX test/cpp_headers/pipe.o 00:03:15.743 CXX test/cpp_headers/queue.o 00:03:15.743 CXX test/cpp_headers/reduce.o 00:03:15.743 CXX 
test/cpp_headers/rpc.o 00:03:15.743 CXX test/cpp_headers/scheduler.o 00:03:15.743 CXX test/cpp_headers/scsi.o 00:03:16.001 CXX test/cpp_headers/sock.o 00:03:16.001 CXX test/cpp_headers/scsi_spec.o 00:03:16.001 CXX test/cpp_headers/stdinc.o 00:03:16.001 CXX test/cpp_headers/string.o 00:03:16.001 CXX test/cpp_headers/thread.o 00:03:16.001 CXX test/cpp_headers/trace.o 00:03:16.001 CXX test/cpp_headers/trace_parser.o 00:03:16.001 CXX test/cpp_headers/tree.o 00:03:16.001 CXX test/cpp_headers/ublk.o 00:03:16.001 CXX test/cpp_headers/util.o 00:03:16.001 CXX test/cpp_headers/uuid.o 00:03:16.001 CXX test/cpp_headers/version.o 00:03:16.001 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.001 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.260 CXX test/cpp_headers/vhost.o 00:03:16.260 CXX test/cpp_headers/vmd.o 00:03:16.260 CXX test/cpp_headers/xor.o 00:03:16.260 CXX test/cpp_headers/zipf.o 00:03:16.826 LINK cuse 00:03:18.205 LINK esnap 00:03:18.774 00:03:18.774 real 1m30.320s 00:03:18.774 user 8m3.835s 00:03:18.774 sys 1m47.281s 00:03:18.774 09:22:07 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:18.774 09:22:07 make -- common/autotest_common.sh@10 -- $ set +x 00:03:18.774 ************************************ 00:03:18.774 END TEST make 00:03:18.774 ************************************ 00:03:18.774 09:22:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:18.774 09:22:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:18.774 09:22:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:18.774 09:22:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.774 09:22:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:18.774 09:22:07 -- pm/common@44 -- $ pid=5471 00:03:18.774 09:22:07 -- pm/common@50 -- $ kill -TERM 5471 00:03:18.774 09:22:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:18.774 09:22:07 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:18.774 09:22:07 -- pm/common@44 -- $ pid=5473 00:03:18.774 09:22:07 -- pm/common@50 -- $ kill -TERM 5473 00:03:18.774 09:22:07 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:18.774 09:22:07 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:18.774 09:22:07 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:18.774 09:22:07 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:18.774 09:22:07 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:19.034 09:22:07 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:19.034 09:22:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:19.034 09:22:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:19.034 09:22:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:19.034 09:22:07 -- scripts/common.sh@336 -- # IFS=.-: 00:03:19.034 09:22:07 -- scripts/common.sh@336 -- # read -ra ver1 00:03:19.034 09:22:07 -- scripts/common.sh@337 -- # IFS=.-: 00:03:19.034 09:22:07 -- scripts/common.sh@337 -- # read -ra ver2 00:03:19.034 09:22:07 -- scripts/common.sh@338 -- # local 'op=<' 00:03:19.034 09:22:07 -- scripts/common.sh@340 -- # ver1_l=2 00:03:19.034 09:22:07 -- scripts/common.sh@341 -- # ver2_l=1 00:03:19.034 09:22:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:19.034 09:22:07 -- scripts/common.sh@344 -- # case "$op" in 00:03:19.034 09:22:07 -- scripts/common.sh@345 -- # : 1 00:03:19.035 09:22:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:19.035 09:22:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:19.035 09:22:07 -- scripts/common.sh@365 -- # decimal 1 00:03:19.035 09:22:07 -- scripts/common.sh@353 -- # local d=1 00:03:19.035 09:22:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:19.035 09:22:07 -- scripts/common.sh@355 -- # echo 1 00:03:19.035 09:22:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:19.035 09:22:07 -- scripts/common.sh@366 -- # decimal 2 00:03:19.035 09:22:07 -- scripts/common.sh@353 -- # local d=2 00:03:19.035 09:22:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:19.035 09:22:07 -- scripts/common.sh@355 -- # echo 2 00:03:19.035 09:22:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:19.035 09:22:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:19.035 09:22:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:19.035 09:22:07 -- scripts/common.sh@368 -- # return 0 00:03:19.035 09:22:07 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:19.035 09:22:07 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:19.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.035 --rc genhtml_branch_coverage=1 00:03:19.035 --rc genhtml_function_coverage=1 00:03:19.035 --rc genhtml_legend=1 00:03:19.035 --rc geninfo_all_blocks=1 00:03:19.035 --rc geninfo_unexecuted_blocks=1 00:03:19.035 00:03:19.035 ' 00:03:19.035 09:22:07 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:19.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.035 --rc genhtml_branch_coverage=1 00:03:19.035 --rc genhtml_function_coverage=1 00:03:19.035 --rc genhtml_legend=1 00:03:19.035 --rc geninfo_all_blocks=1 00:03:19.035 --rc geninfo_unexecuted_blocks=1 00:03:19.035 00:03:19.035 ' 00:03:19.035 09:22:07 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:19.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.035 --rc genhtml_branch_coverage=1 00:03:19.035 --rc 
genhtml_function_coverage=1 00:03:19.035 --rc genhtml_legend=1 00:03:19.035 --rc geninfo_all_blocks=1 00:03:19.035 --rc geninfo_unexecuted_blocks=1 00:03:19.035 00:03:19.035 ' 00:03:19.035 09:22:07 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:19.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:19.035 --rc genhtml_branch_coverage=1 00:03:19.035 --rc genhtml_function_coverage=1 00:03:19.035 --rc genhtml_legend=1 00:03:19.035 --rc geninfo_all_blocks=1 00:03:19.035 --rc geninfo_unexecuted_blocks=1 00:03:19.035 00:03:19.035 ' 00:03:19.035 09:22:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:19.035 09:22:07 -- nvmf/common.sh@7 -- # uname -s 00:03:19.035 09:22:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.035 09:22:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.035 09:22:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.035 09:22:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.035 09:22:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.035 09:22:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.035 09:22:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.035 09:22:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.035 09:22:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.035 09:22:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.035 09:22:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19855559-90e4-4c97-8397-e2a0f4af42ac 00:03:19.035 09:22:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=19855559-90e4-4c97-8397-e2a0f4af42ac 00:03:19.035 09:22:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.035 09:22:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.035 09:22:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:19.035 09:22:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:19.035 09:22:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:19.035 09:22:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:19.035 09:22:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.035 09:22:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.035 09:22:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.035 09:22:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.035 09:22:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.035 09:22:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.035 09:22:07 -- paths/export.sh@5 -- # export PATH 00:03:19.035 09:22:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.035 09:22:07 -- nvmf/common.sh@51 -- # : 0 00:03:19.035 09:22:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:19.035 09:22:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:19.035 09:22:07 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:19.035 09:22:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.035 09:22:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.035 09:22:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:19.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:19.035 09:22:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:19.035 09:22:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:19.035 09:22:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:19.035 09:22:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.035 09:22:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.035 09:22:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.035 09:22:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.035 09:22:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.035 09:22:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.035 09:22:07 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:19.035 09:22:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.035 09:22:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.035 09:22:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.035 09:22:07 -- spdk/autotest.sh@48 -- # udevadm_pid=54472 00:03:19.035 09:22:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:19.035 09:22:07 -- pm/common@17 -- # local monitor 00:03:19.035 09:22:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.035 09:22:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:19.035 09:22:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.035 09:22:07 -- pm/common@25 -- # sleep 1 00:03:19.035 09:22:07 -- pm/common@21 -- # date +%s 00:03:19.035 09:22:07 -- 
pm/common@21 -- # date +%s 00:03:19.035 09:22:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731662527 00:03:19.035 09:22:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731662527 00:03:19.035 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731662527_collect-cpu-load.pm.log 00:03:19.035 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731662527_collect-vmstat.pm.log 00:03:19.974 09:22:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:19.974 09:22:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:19.974 09:22:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:19.974 09:22:08 -- common/autotest_common.sh@10 -- # set +x 00:03:19.974 09:22:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:19.974 09:22:08 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:19.974 09:22:08 -- common/autotest_common.sh@10 -- # set +x 00:03:19.974 09:22:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:19.974 09:22:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:19.974 09:22:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:19.974 09:22:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:19.974 09:22:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:19.974 09:22:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:19.974 09:22:08 -- common/autotest_common.sh@1455 -- # uname 00:03:20.234 09:22:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:20.234 09:22:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.234 09:22:08 -- common/autotest_common.sh@1475 -- 
# uname 00:03:20.234 09:22:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:20.234 09:22:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:20.234 09:22:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:20.234 lcov: LCOV version 1.15 00:03:20.234 09:22:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:38.332 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.332 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.213 09:22:41 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.213 09:22:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:53.213 09:22:41 -- common/autotest_common.sh@10 -- # set +x 00:03:53.213 09:22:41 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.213 09:22:41 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.150 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:54.150 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:54.150 09:22:42 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:54.150 09:22:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:54.150 09:22:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:54.150 09:22:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:54.150 
09:22:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.150 09:22:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:54.150 09:22:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:54.150 09:22:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.150 09:22:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.150 09:22:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.150 09:22:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:54.150 09:22:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:54.150 09:22:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:54.150 09:22:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.150 09:22:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.150 09:22:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:54.150 09:22:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:54.150 09:22:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:54.150 09:22:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.150 09:22:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.150 09:22:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:54.150 09:22:42 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:54.150 09:22:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:54.150 09:22:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.150 09:22:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:54.150 09:22:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.150 09:22:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.150 09:22:42 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:54.150 09:22:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:54.150 09:22:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.150 No valid GPT data, bailing 00:03:54.150 09:22:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.150 09:22:42 -- scripts/common.sh@394 -- # pt= 00:03:54.150 09:22:42 -- scripts/common.sh@395 -- # return 1 00:03:54.150 09:22:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.150 1+0 records in 00:03:54.150 1+0 records out 00:03:54.150 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00404721 s, 259 MB/s 00:03:54.150 09:22:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.150 09:22:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.150 09:22:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:54.150 09:22:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:54.150 09:22:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:54.150 No valid GPT data, bailing 00:03:54.409 09:22:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:54.409 09:22:42 -- scripts/common.sh@394 -- # pt= 00:03:54.409 09:22:42 -- scripts/common.sh@395 -- # return 1 00:03:54.409 09:22:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:54.409 1+0 records in 00:03:54.409 1+0 records out 00:03:54.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00670403 s, 156 MB/s 00:03:54.409 09:22:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.409 09:22:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.409 09:22:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:54.409 09:22:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:54.409 09:22:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:54.409 No valid GPT data, bailing 00:03:54.409 09:22:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:54.409 09:22:42 -- scripts/common.sh@394 -- # pt= 00:03:54.409 09:22:42 -- scripts/common.sh@395 -- # return 1 00:03:54.409 09:22:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:54.409 1+0 records in 00:03:54.409 1+0 records out 00:03:54.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642294 s, 163 MB/s 00:03:54.409 09:22:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.409 09:22:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.409 09:22:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:54.409 09:22:42 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:54.409 09:22:42 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:54.409 No valid GPT data, bailing 00:03:54.409 09:22:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:54.409 09:22:42 -- scripts/common.sh@394 -- # pt= 00:03:54.409 09:22:42 -- scripts/common.sh@395 -- # return 1 00:03:54.409 09:22:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:54.409 1+0 records in 00:03:54.409 1+0 records out 00:03:54.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529334 s, 198 MB/s 00:03:54.409 09:22:42 -- spdk/autotest.sh@105 -- # sync 00:03:54.668 09:22:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.668 09:22:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.668 09:22:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.959 09:22:45 -- spdk/autotest.sh@111 -- # uname -s 00:03:57.959 09:22:45 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:57.959 09:22:45 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:57.959 09:22:45 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:03:58.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.217 Hugepages 00:03:58.217 node hugesize free / total 00:03:58.217 node0 1048576kB 0 / 0 00:03:58.217 node0 2048kB 0 / 0 00:03:58.217 00:03:58.217 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.217 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:58.476 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:58.476 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:58.476 09:22:46 -- spdk/autotest.sh@117 -- # uname -s 00:03:58.476 09:22:46 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:58.476 09:22:46 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:58.476 09:22:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.414 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.414 09:22:47 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:00.793 09:22:48 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:00.793 09:22:48 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:00.793 09:22:48 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:00.793 09:22:48 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:00.793 09:22:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:00.793 09:22:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:00.793 09:22:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.793 09:22:48 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:00.793 09:22:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:00.793 09:22:48 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:00.793 09:22:48 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:00.793 09:22:48 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.793 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.793 Waiting for block devices as requested 00:04:00.793 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:01.052 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:01.052 09:22:49 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:01.052 09:22:49 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:01.052 09:22:49 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:01.052 09:22:49 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:01.052 09:22:49 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:01.052 09:22:49 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:04:01.052 09:22:49 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:01.052 09:22:49 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:01.052 09:22:49 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:01.052 09:22:49 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:01.052 09:22:49 -- common/autotest_common.sh@1541 -- # continue 00:04:01.052 09:22:49 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:01.052 09:22:49 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:01.052 09:22:49 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:01.052 09:22:49 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:01.052 09:22:49 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:01.052 09:22:49 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:01.052 09:22:49 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:01.052 09:22:49 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:01.052 09:22:49 -- common/autotest_common.sh@1538 -- # grep unvmcap 
00:04:01.052 09:22:49 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:01.052 09:22:49 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:01.311 09:22:49 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:01.311 09:22:49 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:01.311 09:22:49 -- common/autotest_common.sh@1541 -- # continue 00:04:01.311 09:22:49 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:01.311 09:22:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.311 09:22:49 -- common/autotest_common.sh@10 -- # set +x 00:04:01.311 09:22:49 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:01.311 09:22:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:01.311 09:22:49 -- common/autotest_common.sh@10 -- # set +x 00:04:01.311 09:22:49 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.139 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.140 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.140 09:22:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:02.140 09:22:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.140 09:22:50 -- common/autotest_common.sh@10 -- # set +x 00:04:02.140 09:22:50 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:02.140 09:22:50 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:02.140 09:22:50 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:02.140 09:22:50 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:02.140 09:22:50 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:02.140 09:22:50 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:02.140 09:22:50 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:02.140 09:22:50 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:02.140 
09:22:50 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:02.140 09:22:50 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:02.140 09:22:50 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.140 09:22:50 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.140 09:22:50 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:02.399 09:22:50 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:02.399 09:22:50 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.399 09:22:50 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:02.399 09:22:50 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:02.399 09:22:50 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:02.399 09:22:50 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:02.399 09:22:50 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:02.399 09:22:50 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:02.399 09:22:50 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:02.399 09:22:50 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:02.399 09:22:50 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:02.399 09:22:50 -- common/autotest_common.sh@1570 -- # return 0 00:04:02.399 09:22:50 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:02.399 09:22:50 -- common/autotest_common.sh@1578 -- # return 0 00:04:02.399 09:22:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.399 09:22:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.399 09:22:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.399 09:22:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.399 09:22:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.399 09:22:50 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.399 09:22:50 -- common/autotest_common.sh@10 -- # set +x 00:04:02.399 09:22:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:02.399 09:22:50 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.399 09:22:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:02.399 09:22:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.400 09:22:50 -- common/autotest_common.sh@10 -- # set +x 00:04:02.400 ************************************ 00:04:02.400 START TEST env 00:04:02.400 ************************************ 00:04:02.400 09:22:50 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.400 * Looking for test storage... 00:04:02.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:02.400 09:22:50 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:02.400 09:22:50 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:02.400 09:22:50 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:02.659 09:22:50 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:02.659 09:22:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.659 09:22:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.659 09:22:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.659 09:22:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.659 09:22:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.659 09:22:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.659 09:22:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.659 09:22:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.659 09:22:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.659 09:22:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.659 09:22:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.659 09:22:50 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:02.659 09:22:50 env -- scripts/common.sh@345 -- # : 1 00:04:02.659 09:22:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.659 09:22:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.659 09:22:50 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.659 09:22:50 env -- scripts/common.sh@353 -- # local d=1 00:04:02.659 09:22:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.659 09:22:50 env -- scripts/common.sh@355 -- # echo 1 00:04:02.659 09:22:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.659 09:22:50 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.659 09:22:50 env -- scripts/common.sh@353 -- # local d=2 00:04:02.659 09:22:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.659 09:22:50 env -- scripts/common.sh@355 -- # echo 2 00:04:02.659 09:22:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.659 09:22:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.659 09:22:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.659 09:22:50 env -- scripts/common.sh@368 -- # return 0 00:04:02.659 09:22:50 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.659 09:22:50 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:02.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.659 --rc genhtml_branch_coverage=1 00:04:02.659 --rc genhtml_function_coverage=1 00:04:02.659 --rc genhtml_legend=1 00:04:02.659 --rc geninfo_all_blocks=1 00:04:02.659 --rc geninfo_unexecuted_blocks=1 00:04:02.659 00:04:02.659 ' 00:04:02.659 09:22:50 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:02.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.659 --rc genhtml_branch_coverage=1 00:04:02.659 --rc genhtml_function_coverage=1 00:04:02.659 --rc genhtml_legend=1 00:04:02.659 --rc 
geninfo_all_blocks=1 00:04:02.659 --rc geninfo_unexecuted_blocks=1 00:04:02.659 00:04:02.659 ' 00:04:02.659 09:22:50 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:02.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.659 --rc genhtml_branch_coverage=1 00:04:02.660 --rc genhtml_function_coverage=1 00:04:02.660 --rc genhtml_legend=1 00:04:02.660 --rc geninfo_all_blocks=1 00:04:02.660 --rc geninfo_unexecuted_blocks=1 00:04:02.660 00:04:02.660 ' 00:04:02.660 09:22:50 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:02.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.660 --rc genhtml_branch_coverage=1 00:04:02.660 --rc genhtml_function_coverage=1 00:04:02.660 --rc genhtml_legend=1 00:04:02.660 --rc geninfo_all_blocks=1 00:04:02.660 --rc geninfo_unexecuted_blocks=1 00:04:02.660 00:04:02.660 ' 00:04:02.660 09:22:50 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.660 09:22:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:02.660 09:22:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.660 09:22:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.660 ************************************ 00:04:02.660 START TEST env_memory 00:04:02.660 ************************************ 00:04:02.660 09:22:50 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.660 00:04:02.660 00:04:02.660 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.660 http://cunit.sourceforge.net/ 00:04:02.660 00:04:02.660 00:04:02.660 Suite: memory 00:04:02.660 Test: alloc and free memory map ...[2024-11-15 09:22:51.008772] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.660 passed 00:04:02.660 Test: mem map translation ...[2024-11-15 09:22:51.063764] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.660 [2024-11-15 09:22:51.063856] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.660 [2024-11-15 09:22:51.063943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.660 [2024-11-15 09:22:51.063967] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.918 passed 00:04:02.918 Test: mem map registration ...[2024-11-15 09:22:51.148122] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.918 [2024-11-15 09:22:51.148217] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.918 passed 00:04:02.918 Test: mem map adjacent registrations ...passed 00:04:02.918 00:04:02.918 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.918 suites 1 1 n/a 0 0 00:04:02.918 tests 4 4 4 0 0 00:04:02.918 asserts 152 152 152 0 n/a 00:04:02.918 00:04:02.918 Elapsed time = 0.299 seconds 00:04:02.918 00:04:02.918 real 0m0.339s 00:04:02.918 user 0m0.304s 00:04:02.918 sys 0m0.027s 00:04:02.918 09:22:51 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:02.918 09:22:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.918 ************************************ 00:04:02.918 END TEST env_memory 00:04:02.918 ************************************ 00:04:02.918 09:22:51 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.918 
09:22:51 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:02.918 09:22:51 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.918 09:22:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.918 ************************************ 00:04:02.918 START TEST env_vtophys 00:04:02.918 ************************************ 00:04:02.918 09:22:51 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:03.178 EAL: lib.eal log level changed from notice to debug 00:04:03.178 EAL: Detected lcore 0 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 1 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 2 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 3 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 4 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 5 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 6 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 7 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 8 as core 0 on socket 0 00:04:03.178 EAL: Detected lcore 9 as core 0 on socket 0 00:04:03.178 EAL: Maximum logical cores by configuration: 128 00:04:03.178 EAL: Detected CPU lcores: 10 00:04:03.178 EAL: Detected NUMA nodes: 1 00:04:03.178 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:03.178 EAL: Detected shared linkage of DPDK 00:04:03.178 EAL: No shared files mode enabled, IPC will be disabled 00:04:03.178 EAL: Selected IOVA mode 'PA' 00:04:03.178 EAL: Probing VFIO support... 00:04:03.178 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:03.178 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:03.178 EAL: Ask a virtual area of 0x2e000 bytes 00:04:03.178 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:03.178 EAL: Setting up physically contiguous memory... 
00:04:03.178 EAL: Setting maximum number of open files to 524288 00:04:03.178 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:03.178 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:03.178 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.178 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:03.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.178 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.178 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:03.178 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:03.178 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.178 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:03.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.178 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.178 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:03.178 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:03.178 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.178 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:03.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.178 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.178 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:03.178 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:03.178 EAL: Ask a virtual area of 0x61000 bytes 00:04:03.178 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:03.178 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:03.178 EAL: Ask a virtual area of 0x400000000 bytes 00:04:03.178 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:03.178 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:03.178 EAL: Hugepages will be freed exactly as allocated. 
00:04:03.178 EAL: No shared files mode enabled, IPC is disabled 00:04:03.178 EAL: No shared files mode enabled, IPC is disabled 00:04:03.178 EAL: TSC frequency is ~2290000 KHz 00:04:03.178 EAL: Main lcore 0 is ready (tid=7f64340bda40;cpuset=[0]) 00:04:03.178 EAL: Trying to obtain current memory policy. 00:04:03.178 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.178 EAL: Restoring previous memory policy: 0 00:04:03.178 EAL: request: mp_malloc_sync 00:04:03.178 EAL: No shared files mode enabled, IPC is disabled 00:04:03.178 EAL: Heap on socket 0 was expanded by 2MB 00:04:03.178 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:03.178 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:03.178 EAL: Mem event callback 'spdk:(nil)' registered 00:04:03.178 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:03.178 00:04:03.178 00:04:03.178 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.178 http://cunit.sourceforge.net/ 00:04:03.178 00:04:03.178 00:04:03.178 Suite: components_suite 00:04:03.746 Test: vtophys_malloc_test ...passed 00:04:03.746 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.746 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.746 EAL: Restoring previous memory policy: 4 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.746 EAL: Trying to obtain current memory policy. 
00:04:03.746 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.746 EAL: Restoring previous memory policy: 4 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.746 EAL: Trying to obtain current memory policy. 00:04:03.746 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.746 EAL: Restoring previous memory policy: 4 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.746 EAL: Trying to obtain current memory policy. 00:04:03.746 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.746 EAL: Restoring previous memory policy: 4 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.746 EAL: Trying to obtain current memory policy. 
00:04:03.746 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.746 EAL: Restoring previous memory policy: 4 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was expanded by 34MB 00:04:03.746 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.746 EAL: request: mp_malloc_sync 00:04:03.746 EAL: No shared files mode enabled, IPC is disabled 00:04:03.746 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.005 EAL: Trying to obtain current memory policy. 00:04:04.005 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.005 EAL: Restoring previous memory policy: 4 00:04:04.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.005 EAL: request: mp_malloc_sync 00:04:04.005 EAL: No shared files mode enabled, IPC is disabled 00:04:04.005 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.005 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.005 EAL: request: mp_malloc_sync 00:04:04.005 EAL: No shared files mode enabled, IPC is disabled 00:04:04.005 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.264 EAL: Trying to obtain current memory policy. 00:04:04.264 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.264 EAL: Restoring previous memory policy: 4 00:04:04.264 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.264 EAL: request: mp_malloc_sync 00:04:04.264 EAL: No shared files mode enabled, IPC is disabled 00:04:04.264 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.522 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.522 EAL: request: mp_malloc_sync 00:04:04.522 EAL: No shared files mode enabled, IPC is disabled 00:04:04.522 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.781 EAL: Trying to obtain current memory policy. 
00:04:04.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.781 EAL: Restoring previous memory policy: 4 00:04:04.781 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.781 EAL: request: mp_malloc_sync 00:04:04.781 EAL: No shared files mode enabled, IPC is disabled 00:04:04.781 EAL: Heap on socket 0 was expanded by 258MB 00:04:05.348 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.348 EAL: request: mp_malloc_sync 00:04:05.348 EAL: No shared files mode enabled, IPC is disabled 00:04:05.348 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.915 EAL: Trying to obtain current memory policy. 00:04:05.915 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.915 EAL: Restoring previous memory policy: 4 00:04:05.915 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.915 EAL: request: mp_malloc_sync 00:04:05.915 EAL: No shared files mode enabled, IPC is disabled 00:04:05.915 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.112 EAL: request: mp_malloc_sync 00:04:07.112 EAL: No shared files mode enabled, IPC is disabled 00:04:07.112 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.054 EAL: Trying to obtain current memory policy. 
00:04:08.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.320 EAL: Restoring previous memory policy: 4 00:04:08.320 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.320 EAL: request: mp_malloc_sync 00:04:08.320 EAL: No shared files mode enabled, IPC is disabled 00:04:08.320 EAL: Heap on socket 0 was expanded by 1026MB 00:04:10.238 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.497 EAL: request: mp_malloc_sync 00:04:10.497 EAL: No shared files mode enabled, IPC is disabled 00:04:10.497 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.440 passed 00:04:12.440 00:04:12.440 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.440 suites 1 1 n/a 0 0 00:04:12.440 tests 2 2 2 0 0 00:04:12.440 asserts 5635 5635 5635 0 n/a 00:04:12.440 00:04:12.440 Elapsed time = 9.218 seconds 00:04:12.440 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.440 EAL: request: mp_malloc_sync 00:04:12.440 EAL: No shared files mode enabled, IPC is disabled 00:04:12.440 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.440 EAL: No shared files mode enabled, IPC is disabled 00:04:12.440 EAL: No shared files mode enabled, IPC is disabled 00:04:12.440 EAL: No shared files mode enabled, IPC is disabled 00:04:12.440 00:04:12.440 real 0m9.562s 00:04:12.440 user 0m8.534s 00:04:12.440 sys 0m0.861s 00:04:12.440 09:23:00 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.440 09:23:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:12.440 ************************************ 00:04:12.440 END TEST env_vtophys 00:04:12.440 ************************************ 00:04:12.700 09:23:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.700 09:23:00 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:12.700 09:23:00 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.700 09:23:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.700 
************************************ 00:04:12.700 START TEST env_pci 00:04:12.700 ************************************ 00:04:12.700 09:23:00 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.700 00:04:12.700 00:04:12.700 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.700 http://cunit.sourceforge.net/ 00:04:12.700 00:04:12.700 00:04:12.700 Suite: pci 00:04:12.700 Test: pci_hook ...[2024-11-15 09:23:00.993887] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56848 has claimed it 00:04:12.700 passed 00:04:12.700 00:04:12.700 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.700 suites 1 1 n/a 0 0 00:04:12.700 tests 1 1 1 0 0 00:04:12.700 asserts 25 25 25 0 n/a 00:04:12.700 00:04:12.700 Elapsed time = 0.007 seconds 00:04:12.700 EAL: Cannot find device (10000:00:01.0) 00:04:12.700 EAL: Failed to attach device on primary process 00:04:12.700 00:04:12.700 real 0m0.092s 00:04:12.700 user 0m0.036s 00:04:12.700 sys 0m0.055s 00:04:12.700 09:23:01 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.700 09:23:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:12.700 ************************************ 00:04:12.700 END TEST env_pci 00:04:12.700 ************************************ 00:04:12.700 09:23:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.700 09:23:01 env -- env/env.sh@15 -- # uname 00:04:12.700 09:23:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:12.700 09:23:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.700 09:23:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.700 09:23:01 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:12.700 09:23:01 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:12.700 09:23:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.700 ************************************ 00:04:12.700 START TEST env_dpdk_post_init 00:04:12.700 ************************************ 00:04:12.700 09:23:01 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.700 EAL: Detected CPU lcores: 10 00:04:12.700 EAL: Detected NUMA nodes: 1 00:04:12.700 EAL: Detected shared linkage of DPDK 00:04:12.959 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.959 EAL: Selected IOVA mode 'PA' 00:04:12.959 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.959 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:12.959 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:12.959 Starting DPDK initialization... 00:04:12.959 Starting SPDK post initialization... 00:04:12.959 SPDK NVMe probe 00:04:12.959 Attaching to 0000:00:10.0 00:04:12.959 Attaching to 0000:00:11.0 00:04:12.959 Attached to 0000:00:10.0 00:04:12.959 Attached to 0000:00:11.0 00:04:12.959 Cleaning up... 
00:04:12.959 00:04:12.959 real 0m0.308s 00:04:12.959 user 0m0.107s 00:04:12.959 sys 0m0.100s 00:04:12.959 09:23:01 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:12.959 09:23:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.959 ************************************ 00:04:12.959 END TEST env_dpdk_post_init 00:04:12.959 ************************************ 00:04:13.216 09:23:01 env -- env/env.sh@26 -- # uname 00:04:13.216 09:23:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.216 09:23:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.216 09:23:01 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.216 09:23:01 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.216 09:23:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.216 ************************************ 00:04:13.216 START TEST env_mem_callbacks 00:04:13.216 ************************************ 00:04:13.216 09:23:01 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.216 EAL: Detected CPU lcores: 10 00:04:13.217 EAL: Detected NUMA nodes: 1 00:04:13.217 EAL: Detected shared linkage of DPDK 00:04:13.217 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.217 EAL: Selected IOVA mode 'PA' 00:04:13.217 00:04:13.217 00:04:13.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.217 http://cunit.sourceforge.net/ 00:04:13.217 00:04:13.217 00:04:13.217 Suite: memory 00:04:13.217 Test: test ... 
00:04:13.217 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.217 register 0x200000200000 2097152 00:04:13.217 malloc 3145728 00:04:13.217 register 0x200000400000 4194304 00:04:13.217 buf 0x2000004fffc0 len 3145728 PASSED 00:04:13.217 malloc 64 00:04:13.217 buf 0x2000004ffec0 len 64 PASSED 00:04:13.217 malloc 4194304 00:04:13.217 register 0x200000800000 6291456 00:04:13.217 buf 0x2000009fffc0 len 4194304 PASSED 00:04:13.217 free 0x2000004fffc0 3145728 00:04:13.217 free 0x2000004ffec0 64 00:04:13.217 unregister 0x200000400000 4194304 PASSED 00:04:13.217 free 0x2000009fffc0 4194304 00:04:13.474 unregister 0x200000800000 6291456 PASSED 00:04:13.474 malloc 8388608 00:04:13.474 register 0x200000400000 10485760 00:04:13.474 buf 0x2000005fffc0 len 8388608 PASSED 00:04:13.474 free 0x2000005fffc0 8388608 00:04:13.474 unregister 0x200000400000 10485760 PASSED 00:04:13.474 passed 00:04:13.474 00:04:13.474 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.474 suites 1 1 n/a 0 0 00:04:13.474 tests 1 1 1 0 0 00:04:13.474 asserts 15 15 15 0 n/a 00:04:13.474 00:04:13.474 Elapsed time = 0.091 seconds 00:04:13.474 00:04:13.474 real 0m0.300s 00:04:13.474 user 0m0.114s 00:04:13.474 sys 0m0.084s 00:04:13.474 09:23:01 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.474 09:23:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.474 ************************************ 00:04:13.474 END TEST env_mem_callbacks 00:04:13.474 ************************************ 00:04:13.474 00:04:13.474 real 0m11.111s 00:04:13.474 user 0m9.311s 00:04:13.474 sys 0m1.437s 00:04:13.474 09:23:01 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.474 09:23:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.474 ************************************ 00:04:13.474 END TEST env 00:04:13.474 ************************************ 00:04:13.474 09:23:01 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.474 09:23:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.474 09:23:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.474 09:23:01 -- common/autotest_common.sh@10 -- # set +x 00:04:13.475 ************************************ 00:04:13.475 START TEST rpc 00:04:13.475 ************************************ 00:04:13.475 09:23:01 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.733 * Looking for test storage... 00:04:13.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.733 09:23:01 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:13.733 09:23:01 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:13.733 09:23:01 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:13.733 09:23:02 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.733 09:23:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.733 09:23:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.733 09:23:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.733 09:23:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.733 09:23:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.733 09:23:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.733 09:23:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.733 09:23:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.733 09:23:02 rpc -- scripts/common.sh@345 -- # : 1 00:04:13.733 09:23:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.733 09:23:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.733 09:23:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.733 09:23:02 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.733 09:23:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.733 09:23:02 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.733 09:23:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.733 09:23:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.733 09:23:02 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.733 09:23:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.733 09:23:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.733 09:23:02 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.733 09:23:02 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.733 09:23:02 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:13.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.733 --rc genhtml_branch_coverage=1 00:04:13.733 --rc genhtml_function_coverage=1 00:04:13.733 --rc genhtml_legend=1 00:04:13.733 --rc geninfo_all_blocks=1 00:04:13.734 --rc geninfo_unexecuted_blocks=1 00:04:13.734 00:04:13.734 ' 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:13.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.734 --rc genhtml_branch_coverage=1 00:04:13.734 --rc genhtml_function_coverage=1 00:04:13.734 --rc genhtml_legend=1 00:04:13.734 --rc geninfo_all_blocks=1 00:04:13.734 --rc geninfo_unexecuted_blocks=1 00:04:13.734 00:04:13.734 ' 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:13.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:13.734 --rc genhtml_branch_coverage=1 00:04:13.734 --rc genhtml_function_coverage=1 00:04:13.734 --rc genhtml_legend=1 00:04:13.734 --rc geninfo_all_blocks=1 00:04:13.734 --rc geninfo_unexecuted_blocks=1 00:04:13.734 00:04:13.734 ' 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:13.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.734 --rc genhtml_branch_coverage=1 00:04:13.734 --rc genhtml_function_coverage=1 00:04:13.734 --rc genhtml_legend=1 00:04:13.734 --rc geninfo_all_blocks=1 00:04:13.734 --rc geninfo_unexecuted_blocks=1 00:04:13.734 00:04:13.734 ' 00:04:13.734 09:23:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.734 09:23:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56975 00:04:13.734 09:23:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.734 09:23:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56975 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@833 -- # '[' -z 56975 ']' 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:13.734 09:23:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.993 [2024-11-15 09:23:02.219558] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:04:13.993 [2024-11-15 09:23:02.219792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56975 ] 00:04:13.993 [2024-11-15 09:23:02.409708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.253 [2024-11-15 09:23:02.605246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.253 [2024-11-15 09:23:02.605350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56975' to capture a snapshot of events at runtime. 00:04:14.253 [2024-11-15 09:23:02.605363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.253 [2024-11-15 09:23:02.605385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.253 [2024-11-15 09:23:02.605397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56975 for offline analysis/debug. 
00:04:14.253 [2024-11-15 09:23:02.607024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.632 09:23:03 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:15.632 09:23:03 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:15.632 09:23:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.632 09:23:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.632 09:23:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.632 09:23:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.632 09:23:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.632 09:23:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.632 09:23:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.632 ************************************ 00:04:15.632 START TEST rpc_integrity 00:04:15.632 ************************************ 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.632 09:23:03 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.632 { 00:04:15.632 "name": "Malloc0", 00:04:15.632 "aliases": [ 00:04:15.632 "895deb85-87d0-48f4-8f02-654bbd3586d5" 00:04:15.632 ], 00:04:15.632 "product_name": "Malloc disk", 00:04:15.632 "block_size": 512, 00:04:15.632 "num_blocks": 16384, 00:04:15.632 "uuid": "895deb85-87d0-48f4-8f02-654bbd3586d5", 00:04:15.632 "assigned_rate_limits": { 00:04:15.632 "rw_ios_per_sec": 0, 00:04:15.632 "rw_mbytes_per_sec": 0, 00:04:15.632 "r_mbytes_per_sec": 0, 00:04:15.632 "w_mbytes_per_sec": 0 00:04:15.632 }, 00:04:15.632 "claimed": false, 00:04:15.632 "zoned": false, 00:04:15.632 "supported_io_types": { 00:04:15.632 "read": true, 00:04:15.632 "write": true, 00:04:15.632 "unmap": true, 00:04:15.632 "flush": true, 00:04:15.632 "reset": true, 00:04:15.632 "nvme_admin": false, 00:04:15.632 "nvme_io": false, 00:04:15.632 "nvme_io_md": false, 00:04:15.632 "write_zeroes": true, 00:04:15.632 "zcopy": true, 00:04:15.632 "get_zone_info": false, 00:04:15.632 "zone_management": false, 00:04:15.632 "zone_append": false, 00:04:15.632 "compare": false, 00:04:15.632 "compare_and_write": false, 00:04:15.632 "abort": true, 00:04:15.632 "seek_hole": false, 
00:04:15.632 "seek_data": false, 00:04:15.632 "copy": true, 00:04:15.632 "nvme_iov_md": false 00:04:15.632 }, 00:04:15.632 "memory_domains": [ 00:04:15.632 { 00:04:15.632 "dma_device_id": "system", 00:04:15.632 "dma_device_type": 1 00:04:15.632 }, 00:04:15.632 { 00:04:15.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.632 "dma_device_type": 2 00:04:15.632 } 00:04:15.632 ], 00:04:15.632 "driver_specific": {} 00:04:15.632 } 00:04:15.632 ]' 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.632 [2024-11-15 09:23:03.993028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.632 [2024-11-15 09:23:03.993126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.632 [2024-11-15 09:23:03.993159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:15.632 [2024-11-15 09:23:03.993179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.632 [2024-11-15 09:23:03.996456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.632 [2024-11-15 09:23:03.996520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.632 Passthru0 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.632 09:23:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.632 09:23:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:15.632 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.632 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.632 { 00:04:15.632 "name": "Malloc0", 00:04:15.632 "aliases": [ 00:04:15.632 "895deb85-87d0-48f4-8f02-654bbd3586d5" 00:04:15.632 ], 00:04:15.632 "product_name": "Malloc disk", 00:04:15.632 "block_size": 512, 00:04:15.632 "num_blocks": 16384, 00:04:15.632 "uuid": "895deb85-87d0-48f4-8f02-654bbd3586d5", 00:04:15.632 "assigned_rate_limits": { 00:04:15.632 "rw_ios_per_sec": 0, 00:04:15.632 "rw_mbytes_per_sec": 0, 00:04:15.632 "r_mbytes_per_sec": 0, 00:04:15.632 "w_mbytes_per_sec": 0 00:04:15.632 }, 00:04:15.632 "claimed": true, 00:04:15.632 "claim_type": "exclusive_write", 00:04:15.632 "zoned": false, 00:04:15.632 "supported_io_types": { 00:04:15.632 "read": true, 00:04:15.632 "write": true, 00:04:15.632 "unmap": true, 00:04:15.632 "flush": true, 00:04:15.632 "reset": true, 00:04:15.632 "nvme_admin": false, 00:04:15.632 "nvme_io": false, 00:04:15.632 "nvme_io_md": false, 00:04:15.632 "write_zeroes": true, 00:04:15.632 "zcopy": true, 00:04:15.632 "get_zone_info": false, 00:04:15.632 "zone_management": false, 00:04:15.632 "zone_append": false, 00:04:15.632 "compare": false, 00:04:15.632 "compare_and_write": false, 00:04:15.632 "abort": true, 00:04:15.632 "seek_hole": false, 00:04:15.632 "seek_data": false, 00:04:15.632 "copy": true, 00:04:15.632 "nvme_iov_md": false 00:04:15.632 }, 00:04:15.632 "memory_domains": [ 00:04:15.632 { 00:04:15.632 "dma_device_id": "system", 00:04:15.632 "dma_device_type": 1 00:04:15.632 }, 00:04:15.632 { 00:04:15.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.632 "dma_device_type": 2 00:04:15.632 } 00:04:15.632 ], 00:04:15.633 "driver_specific": {} 00:04:15.633 }, 00:04:15.633 { 00:04:15.633 "name": "Passthru0", 00:04:15.633 "aliases": [ 00:04:15.633 "58550e9e-1222-5615-a125-abb742b7a1b7" 00:04:15.633 ], 00:04:15.633 "product_name": "passthru", 00:04:15.633 
"block_size": 512, 00:04:15.633 "num_blocks": 16384, 00:04:15.633 "uuid": "58550e9e-1222-5615-a125-abb742b7a1b7", 00:04:15.633 "assigned_rate_limits": { 00:04:15.633 "rw_ios_per_sec": 0, 00:04:15.633 "rw_mbytes_per_sec": 0, 00:04:15.633 "r_mbytes_per_sec": 0, 00:04:15.633 "w_mbytes_per_sec": 0 00:04:15.633 }, 00:04:15.633 "claimed": false, 00:04:15.633 "zoned": false, 00:04:15.633 "supported_io_types": { 00:04:15.633 "read": true, 00:04:15.633 "write": true, 00:04:15.633 "unmap": true, 00:04:15.633 "flush": true, 00:04:15.633 "reset": true, 00:04:15.633 "nvme_admin": false, 00:04:15.633 "nvme_io": false, 00:04:15.633 "nvme_io_md": false, 00:04:15.633 "write_zeroes": true, 00:04:15.633 "zcopy": true, 00:04:15.633 "get_zone_info": false, 00:04:15.633 "zone_management": false, 00:04:15.633 "zone_append": false, 00:04:15.633 "compare": false, 00:04:15.633 "compare_and_write": false, 00:04:15.633 "abort": true, 00:04:15.633 "seek_hole": false, 00:04:15.633 "seek_data": false, 00:04:15.633 "copy": true, 00:04:15.633 "nvme_iov_md": false 00:04:15.633 }, 00:04:15.633 "memory_domains": [ 00:04:15.633 { 00:04:15.633 "dma_device_id": "system", 00:04:15.633 "dma_device_type": 1 00:04:15.633 }, 00:04:15.633 { 00:04:15.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.633 "dma_device_type": 2 00:04:15.633 } 00:04:15.633 ], 00:04:15.633 "driver_specific": { 00:04:15.633 "passthru": { 00:04:15.633 "name": "Passthru0", 00:04:15.633 "base_bdev_name": "Malloc0" 00:04:15.633 } 00:04:15.633 } 00:04:15.633 } 00:04:15.633 ]' 00:04:15.633 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.633 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.633 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.633 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.633 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.633 09:23:04 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.633 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.633 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.633 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.892 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.892 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.892 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.892 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.892 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.892 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.892 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.892 09:23:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.892 00:04:15.892 real 0m0.348s 00:04:15.892 user 0m0.193s 00:04:15.892 sys 0m0.046s 00:04:15.892 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:15.892 09:23:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.892 ************************************ 00:04:15.892 END TEST rpc_integrity 00:04:15.892 ************************************ 00:04:15.892 09:23:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.892 09:23:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:15.892 09:23:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:15.892 09:23:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.892 ************************************ 00:04:15.892 START TEST rpc_plugins 00:04:15.892 ************************************ 00:04:15.892 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:15.892 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.892 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.892 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.892 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.892 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.892 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.892 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.892 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.892 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.892 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.892 { 00:04:15.892 "name": "Malloc1", 00:04:15.892 "aliases": [ 00:04:15.892 "6036acb9-fdab-4fea-b5bb-f2c500c910c6" 00:04:15.892 ], 00:04:15.892 "product_name": "Malloc disk", 00:04:15.892 "block_size": 4096, 00:04:15.892 "num_blocks": 256, 00:04:15.892 "uuid": "6036acb9-fdab-4fea-b5bb-f2c500c910c6", 00:04:15.892 "assigned_rate_limits": { 00:04:15.892 "rw_ios_per_sec": 0, 00:04:15.892 "rw_mbytes_per_sec": 0, 00:04:15.892 "r_mbytes_per_sec": 0, 00:04:15.892 "w_mbytes_per_sec": 0 00:04:15.892 }, 00:04:15.892 "claimed": false, 00:04:15.892 "zoned": false, 00:04:15.892 "supported_io_types": { 00:04:15.892 "read": true, 00:04:15.892 "write": true, 00:04:15.893 "unmap": true, 00:04:15.893 "flush": true, 00:04:15.893 "reset": true, 00:04:15.893 "nvme_admin": false, 00:04:15.893 "nvme_io": false, 00:04:15.893 "nvme_io_md": false, 00:04:15.893 "write_zeroes": true, 00:04:15.893 "zcopy": true, 00:04:15.893 "get_zone_info": false, 00:04:15.893 "zone_management": false, 00:04:15.893 "zone_append": false, 00:04:15.893 "compare": false, 00:04:15.893 "compare_and_write": false, 00:04:15.893 "abort": true, 00:04:15.893 "seek_hole": false, 00:04:15.893 "seek_data": false, 00:04:15.893 "copy": 
true, 00:04:15.893 "nvme_iov_md": false 00:04:15.893 }, 00:04:15.893 "memory_domains": [ 00:04:15.893 { 00:04:15.893 "dma_device_id": "system", 00:04:15.893 "dma_device_type": 1 00:04:15.893 }, 00:04:15.893 { 00:04:15.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.893 "dma_device_type": 2 00:04:15.893 } 00:04:15.893 ], 00:04:15.893 "driver_specific": {} 00:04:15.893 } 00:04:15.893 ]' 00:04:15.893 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.893 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.893 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.893 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.893 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.893 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.893 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.893 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.893 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.893 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.893 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:15.893 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.151 09:23:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.151 00:04:16.151 real 0m0.166s 00:04:16.151 user 0m0.100s 00:04:16.151 sys 0m0.024s 00:04:16.151 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.151 09:23:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.151 ************************************ 00:04:16.151 END TEST rpc_plugins 00:04:16.151 ************************************ 00:04:16.151 09:23:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.151 09:23:04 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.151 09:23:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.151 09:23:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.151 ************************************ 00:04:16.151 START TEST rpc_trace_cmd_test 00:04:16.151 ************************************ 00:04:16.151 09:23:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:16.151 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.151 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.151 09:23:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.151 09:23:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.151 09:23:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.151 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.151 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56975", 00:04:16.151 "tpoint_group_mask": "0x8", 00:04:16.151 "iscsi_conn": { 00:04:16.151 "mask": "0x2", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "scsi": { 00:04:16.151 "mask": "0x4", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "bdev": { 00:04:16.151 "mask": "0x8", 00:04:16.151 "tpoint_mask": "0xffffffffffffffff" 00:04:16.151 }, 00:04:16.151 "nvmf_rdma": { 00:04:16.151 "mask": "0x10", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "nvmf_tcp": { 00:04:16.151 "mask": "0x20", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "ftl": { 00:04:16.151 "mask": "0x40", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "blobfs": { 00:04:16.151 "mask": "0x80", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "dsa": { 00:04:16.151 "mask": "0x200", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "thread": { 00:04:16.151 "mask": "0x400", 00:04:16.151 
"tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "nvme_pcie": { 00:04:16.151 "mask": "0x800", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "iaa": { 00:04:16.151 "mask": "0x1000", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "nvme_tcp": { 00:04:16.151 "mask": "0x2000", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "bdev_nvme": { 00:04:16.151 "mask": "0x4000", 00:04:16.151 "tpoint_mask": "0x0" 00:04:16.151 }, 00:04:16.151 "sock": { 00:04:16.152 "mask": "0x8000", 00:04:16.152 "tpoint_mask": "0x0" 00:04:16.152 }, 00:04:16.152 "blob": { 00:04:16.152 "mask": "0x10000", 00:04:16.152 "tpoint_mask": "0x0" 00:04:16.152 }, 00:04:16.152 "bdev_raid": { 00:04:16.152 "mask": "0x20000", 00:04:16.152 "tpoint_mask": "0x0" 00:04:16.152 }, 00:04:16.152 "scheduler": { 00:04:16.152 "mask": "0x40000", 00:04:16.152 "tpoint_mask": "0x0" 00:04:16.152 } 00:04:16.152 }' 00:04:16.152 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.152 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.152 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.152 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.152 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.152 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.152 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.408 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.408 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.408 09:23:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.408 00:04:16.408 real 0m0.236s 00:04:16.408 user 0m0.192s 00:04:16.408 sys 0m0.035s 00:04:16.408 09:23:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:04:16.408 09:23:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.408 ************************************ 00:04:16.408 END TEST rpc_trace_cmd_test 00:04:16.408 ************************************ 00:04:16.408 09:23:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.408 09:23:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.408 09:23:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.408 09:23:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.408 09:23:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.408 09:23:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.408 ************************************ 00:04:16.408 START TEST rpc_daemon_integrity 00:04:16.408 ************************************ 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.408 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.408 { 00:04:16.408 "name": "Malloc2", 00:04:16.408 "aliases": [ 00:04:16.408 "8a1f9f4a-d024-40ef-b1d4-be7ed4f2027f" 00:04:16.408 ], 00:04:16.408 "product_name": "Malloc disk", 00:04:16.408 "block_size": 512, 00:04:16.408 "num_blocks": 16384, 00:04:16.408 "uuid": "8a1f9f4a-d024-40ef-b1d4-be7ed4f2027f", 00:04:16.408 "assigned_rate_limits": { 00:04:16.408 "rw_ios_per_sec": 0, 00:04:16.409 "rw_mbytes_per_sec": 0, 00:04:16.409 "r_mbytes_per_sec": 0, 00:04:16.409 "w_mbytes_per_sec": 0 00:04:16.409 }, 00:04:16.409 "claimed": false, 00:04:16.409 "zoned": false, 00:04:16.409 "supported_io_types": { 00:04:16.409 "read": true, 00:04:16.409 "write": true, 00:04:16.409 "unmap": true, 00:04:16.409 "flush": true, 00:04:16.409 "reset": true, 00:04:16.409 "nvme_admin": false, 00:04:16.409 "nvme_io": false, 00:04:16.409 "nvme_io_md": false, 00:04:16.409 "write_zeroes": true, 00:04:16.409 "zcopy": true, 00:04:16.409 "get_zone_info": false, 00:04:16.409 "zone_management": false, 00:04:16.409 "zone_append": false, 00:04:16.409 "compare": false, 00:04:16.409 "compare_and_write": false, 00:04:16.409 "abort": true, 00:04:16.409 "seek_hole": false, 00:04:16.409 "seek_data": false, 00:04:16.409 "copy": true, 00:04:16.409 "nvme_iov_md": false 00:04:16.409 }, 00:04:16.409 "memory_domains": [ 00:04:16.409 { 00:04:16.409 "dma_device_id": "system", 00:04:16.409 "dma_device_type": 1 00:04:16.409 }, 00:04:16.409 { 00:04:16.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.409 "dma_device_type": 2 00:04:16.409 } 
00:04:16.409 ], 00:04:16.409 "driver_specific": {} 00:04:16.409 } 00:04:16.409 ]' 00:04:16.409 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.666 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.666 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.666 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.666 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.666 [2024-11-15 09:23:04.900297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.667 [2024-11-15 09:23:04.900420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.667 [2024-11-15 09:23:04.900453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:16.667 [2024-11-15 09:23:04.900468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.667 [2024-11-15 09:23:04.903819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.667 [2024-11-15 09:23:04.903921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.667 Passthru0 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.667 { 00:04:16.667 "name": "Malloc2", 00:04:16.667 "aliases": [ 00:04:16.667 "8a1f9f4a-d024-40ef-b1d4-be7ed4f2027f" 
00:04:16.667 ], 00:04:16.667 "product_name": "Malloc disk", 00:04:16.667 "block_size": 512, 00:04:16.667 "num_blocks": 16384, 00:04:16.667 "uuid": "8a1f9f4a-d024-40ef-b1d4-be7ed4f2027f", 00:04:16.667 "assigned_rate_limits": { 00:04:16.667 "rw_ios_per_sec": 0, 00:04:16.667 "rw_mbytes_per_sec": 0, 00:04:16.667 "r_mbytes_per_sec": 0, 00:04:16.667 "w_mbytes_per_sec": 0 00:04:16.667 }, 00:04:16.667 "claimed": true, 00:04:16.667 "claim_type": "exclusive_write", 00:04:16.667 "zoned": false, 00:04:16.667 "supported_io_types": { 00:04:16.667 "read": true, 00:04:16.667 "write": true, 00:04:16.667 "unmap": true, 00:04:16.667 "flush": true, 00:04:16.667 "reset": true, 00:04:16.667 "nvme_admin": false, 00:04:16.667 "nvme_io": false, 00:04:16.667 "nvme_io_md": false, 00:04:16.667 "write_zeroes": true, 00:04:16.667 "zcopy": true, 00:04:16.667 "get_zone_info": false, 00:04:16.667 "zone_management": false, 00:04:16.667 "zone_append": false, 00:04:16.667 "compare": false, 00:04:16.667 "compare_and_write": false, 00:04:16.667 "abort": true, 00:04:16.667 "seek_hole": false, 00:04:16.667 "seek_data": false, 00:04:16.667 "copy": true, 00:04:16.667 "nvme_iov_md": false 00:04:16.667 }, 00:04:16.667 "memory_domains": [ 00:04:16.667 { 00:04:16.667 "dma_device_id": "system", 00:04:16.667 "dma_device_type": 1 00:04:16.667 }, 00:04:16.667 { 00:04:16.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.667 "dma_device_type": 2 00:04:16.667 } 00:04:16.667 ], 00:04:16.667 "driver_specific": {} 00:04:16.667 }, 00:04:16.667 { 00:04:16.667 "name": "Passthru0", 00:04:16.667 "aliases": [ 00:04:16.667 "b557e298-d934-56e0-9401-d1c663de1125" 00:04:16.667 ], 00:04:16.667 "product_name": "passthru", 00:04:16.667 "block_size": 512, 00:04:16.667 "num_blocks": 16384, 00:04:16.667 "uuid": "b557e298-d934-56e0-9401-d1c663de1125", 00:04:16.667 "assigned_rate_limits": { 00:04:16.667 "rw_ios_per_sec": 0, 00:04:16.667 "rw_mbytes_per_sec": 0, 00:04:16.667 "r_mbytes_per_sec": 0, 00:04:16.667 "w_mbytes_per_sec": 0 
00:04:16.667 }, 00:04:16.667 "claimed": false, 00:04:16.667 "zoned": false, 00:04:16.667 "supported_io_types": { 00:04:16.667 "read": true, 00:04:16.667 "write": true, 00:04:16.667 "unmap": true, 00:04:16.667 "flush": true, 00:04:16.667 "reset": true, 00:04:16.667 "nvme_admin": false, 00:04:16.667 "nvme_io": false, 00:04:16.667 "nvme_io_md": false, 00:04:16.667 "write_zeroes": true, 00:04:16.667 "zcopy": true, 00:04:16.667 "get_zone_info": false, 00:04:16.667 "zone_management": false, 00:04:16.667 "zone_append": false, 00:04:16.667 "compare": false, 00:04:16.667 "compare_and_write": false, 00:04:16.667 "abort": true, 00:04:16.667 "seek_hole": false, 00:04:16.667 "seek_data": false, 00:04:16.667 "copy": true, 00:04:16.667 "nvme_iov_md": false 00:04:16.667 }, 00:04:16.667 "memory_domains": [ 00:04:16.667 { 00:04:16.667 "dma_device_id": "system", 00:04:16.667 "dma_device_type": 1 00:04:16.667 }, 00:04:16.667 { 00:04:16.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.667 "dma_device_type": 2 00:04:16.667 } 00:04:16.667 ], 00:04:16.667 "driver_specific": { 00:04:16.667 "passthru": { 00:04:16.667 "name": "Passthru0", 00:04:16.667 "base_bdev_name": "Malloc2" 00:04:16.667 } 00:04:16.667 } 00:04:16.667 } 00:04:16.667 ]' 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:16.667 09:23:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.667 ************************************ 00:04:16.667 END TEST rpc_daemon_integrity 00:04:16.667 ************************************ 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.667 00:04:16.667 real 0m0.372s 00:04:16.667 user 0m0.199s 00:04:16.667 sys 0m0.048s 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.667 09:23:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.926 09:23:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.926 09:23:05 rpc -- rpc/rpc.sh@84 -- # killprocess 56975 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@952 -- # '[' -z 56975 ']' 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@956 -- # kill -0 56975 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@957 -- # uname 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56975 00:04:16.926 killing process with pid 56975 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56975' 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@971 -- # kill 56975 00:04:16.926 09:23:05 rpc -- common/autotest_common.sh@976 -- # wait 56975 00:04:20.243 00:04:20.243 real 0m6.524s 00:04:20.243 user 0m6.963s 00:04:20.244 sys 0m1.128s 00:04:20.244 ************************************ 00:04:20.244 END TEST rpc 00:04:20.244 ************************************ 00:04:20.244 09:23:08 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.244 09:23:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.244 09:23:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.244 09:23:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.244 09:23:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.244 09:23:08 -- common/autotest_common.sh@10 -- # set +x 00:04:20.244 ************************************ 00:04:20.244 START TEST skip_rpc 00:04:20.244 ************************************ 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.244 * Looking for test storage... 
00:04:20.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.244 09:23:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.244 --rc genhtml_branch_coverage=1 00:04:20.244 --rc genhtml_function_coverage=1 00:04:20.244 --rc genhtml_legend=1 00:04:20.244 --rc geninfo_all_blocks=1 00:04:20.244 --rc geninfo_unexecuted_blocks=1 00:04:20.244 00:04:20.244 ' 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.244 --rc genhtml_branch_coverage=1 00:04:20.244 --rc genhtml_function_coverage=1 00:04:20.244 --rc genhtml_legend=1 00:04:20.244 --rc geninfo_all_blocks=1 00:04:20.244 --rc geninfo_unexecuted_blocks=1 00:04:20.244 00:04:20.244 ' 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.244 --rc genhtml_branch_coverage=1 00:04:20.244 --rc genhtml_function_coverage=1 00:04:20.244 --rc genhtml_legend=1 00:04:20.244 --rc geninfo_all_blocks=1 00:04:20.244 --rc geninfo_unexecuted_blocks=1 00:04:20.244 00:04:20.244 ' 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.244 --rc genhtml_branch_coverage=1 00:04:20.244 --rc genhtml_function_coverage=1 00:04:20.244 --rc genhtml_legend=1 00:04:20.244 --rc geninfo_all_blocks=1 00:04:20.244 --rc geninfo_unexecuted_blocks=1 00:04:20.244 00:04:20.244 ' 00:04:20.244 09:23:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.244 09:23:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.244 09:23:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.244 09:23:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.244 ************************************ 00:04:20.244 START TEST skip_rpc 00:04:20.244 ************************************ 00:04:20.244 09:23:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:20.244 09:23:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57215 00:04:20.244 09:23:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:20.244 09:23:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.244 09:23:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:20.503 [2024-11-15 09:23:08.807604] Starting SPDK v25.01-pre 
git sha1 318515b44 / DPDK 24.03.0 initialization... 00:04:20.503 [2024-11-15 09:23:08.808047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57215 ] 00:04:20.761 [2024-11-15 09:23:09.000073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.761 [2024-11-15 09:23:09.221650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57215 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57215 ']' 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57215 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57215 00:04:26.060 killing process with pid 57215 00:04:26.060 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:26.061 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:26.061 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57215' 00:04:26.061 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57215 00:04:26.061 09:23:13 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57215 00:04:28.594 ************************************ 00:04:28.594 END TEST skip_rpc 00:04:28.594 ************************************ 00:04:28.594 00:04:28.594 real 0m8.239s 00:04:28.594 user 0m7.483s 00:04:28.594 sys 0m0.636s 00:04:28.595 09:23:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.595 09:23:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.595 09:23:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:28.595 09:23:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.595 09:23:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.595 09:23:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.595 
************************************ 00:04:28.595 START TEST skip_rpc_with_json 00:04:28.595 ************************************ 00:04:28.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57330 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57330 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57330 ']' 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.595 09:23:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.854 [2024-11-15 09:23:17.074173] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:04:28.854 [2024-11-15 09:23:17.074350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57330 ] 00:04:28.854 [2024-11-15 09:23:17.264550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.113 [2024-11-15 09:23:17.432111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.493 [2024-11-15 09:23:18.603966] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:30.493 request: 00:04:30.493 { 00:04:30.493 "trtype": "tcp", 00:04:30.493 "method": "nvmf_get_transports", 00:04:30.493 "req_id": 1 00:04:30.493 } 00:04:30.493 Got JSON-RPC error response 00:04:30.493 response: 00:04:30.493 { 00:04:30.493 "code": -19, 00:04:30.493 "message": "No such device" 00:04:30.493 } 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.493 [2024-11-15 09:23:18.616183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.493 09:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.493 { 00:04:30.493 "subsystems": [ 00:04:30.493 { 00:04:30.493 "subsystem": "fsdev", 00:04:30.493 "config": [ 00:04:30.493 { 00:04:30.493 "method": "fsdev_set_opts", 00:04:30.493 "params": { 00:04:30.493 "fsdev_io_pool_size": 65535, 00:04:30.493 "fsdev_io_cache_size": 256 00:04:30.493 } 00:04:30.493 } 00:04:30.493 ] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "keyring", 00:04:30.493 "config": [] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "iobuf", 00:04:30.493 "config": [ 00:04:30.493 { 00:04:30.493 "method": "iobuf_set_options", 00:04:30.493 "params": { 00:04:30.493 "small_pool_count": 8192, 00:04:30.493 "large_pool_count": 1024, 00:04:30.493 "small_bufsize": 8192, 00:04:30.493 "large_bufsize": 135168, 00:04:30.493 "enable_numa": false 00:04:30.493 } 00:04:30.493 } 00:04:30.493 ] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "sock", 00:04:30.493 "config": [ 00:04:30.493 { 00:04:30.493 "method": "sock_set_default_impl", 00:04:30.493 "params": { 00:04:30.493 "impl_name": "posix" 00:04:30.493 } 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "method": "sock_impl_set_options", 00:04:30.493 "params": { 00:04:30.493 "impl_name": "ssl", 00:04:30.493 "recv_buf_size": 4096, 00:04:30.493 "send_buf_size": 4096, 00:04:30.493 "enable_recv_pipe": true, 00:04:30.493 "enable_quickack": false, 00:04:30.493 
"enable_placement_id": 0, 00:04:30.493 "enable_zerocopy_send_server": true, 00:04:30.493 "enable_zerocopy_send_client": false, 00:04:30.493 "zerocopy_threshold": 0, 00:04:30.493 "tls_version": 0, 00:04:30.493 "enable_ktls": false 00:04:30.493 } 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "method": "sock_impl_set_options", 00:04:30.493 "params": { 00:04:30.493 "impl_name": "posix", 00:04:30.493 "recv_buf_size": 2097152, 00:04:30.493 "send_buf_size": 2097152, 00:04:30.493 "enable_recv_pipe": true, 00:04:30.493 "enable_quickack": false, 00:04:30.493 "enable_placement_id": 0, 00:04:30.493 "enable_zerocopy_send_server": true, 00:04:30.493 "enable_zerocopy_send_client": false, 00:04:30.493 "zerocopy_threshold": 0, 00:04:30.493 "tls_version": 0, 00:04:30.493 "enable_ktls": false 00:04:30.493 } 00:04:30.493 } 00:04:30.493 ] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "vmd", 00:04:30.493 "config": [] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "accel", 00:04:30.493 "config": [ 00:04:30.493 { 00:04:30.493 "method": "accel_set_options", 00:04:30.493 "params": { 00:04:30.493 "small_cache_size": 128, 00:04:30.493 "large_cache_size": 16, 00:04:30.493 "task_count": 2048, 00:04:30.493 "sequence_count": 2048, 00:04:30.493 "buf_count": 2048 00:04:30.493 } 00:04:30.493 } 00:04:30.493 ] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "bdev", 00:04:30.493 "config": [ 00:04:30.493 { 00:04:30.493 "method": "bdev_set_options", 00:04:30.493 "params": { 00:04:30.493 "bdev_io_pool_size": 65535, 00:04:30.493 "bdev_io_cache_size": 256, 00:04:30.493 "bdev_auto_examine": true, 00:04:30.493 "iobuf_small_cache_size": 128, 00:04:30.493 "iobuf_large_cache_size": 16 00:04:30.493 } 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "method": "bdev_raid_set_options", 00:04:30.493 "params": { 00:04:30.493 "process_window_size_kb": 1024, 00:04:30.493 "process_max_bandwidth_mb_sec": 0 00:04:30.493 } 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "method": "bdev_iscsi_set_options", 
00:04:30.493 "params": { 00:04:30.493 "timeout_sec": 30 00:04:30.493 } 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "method": "bdev_nvme_set_options", 00:04:30.493 "params": { 00:04:30.493 "action_on_timeout": "none", 00:04:30.493 "timeout_us": 0, 00:04:30.493 "timeout_admin_us": 0, 00:04:30.493 "keep_alive_timeout_ms": 10000, 00:04:30.493 "arbitration_burst": 0, 00:04:30.493 "low_priority_weight": 0, 00:04:30.493 "medium_priority_weight": 0, 00:04:30.493 "high_priority_weight": 0, 00:04:30.493 "nvme_adminq_poll_period_us": 10000, 00:04:30.493 "nvme_ioq_poll_period_us": 0, 00:04:30.493 "io_queue_requests": 0, 00:04:30.493 "delay_cmd_submit": true, 00:04:30.493 "transport_retry_count": 4, 00:04:30.493 "bdev_retry_count": 3, 00:04:30.493 "transport_ack_timeout": 0, 00:04:30.493 "ctrlr_loss_timeout_sec": 0, 00:04:30.493 "reconnect_delay_sec": 0, 00:04:30.493 "fast_io_fail_timeout_sec": 0, 00:04:30.493 "disable_auto_failback": false, 00:04:30.493 "generate_uuids": false, 00:04:30.493 "transport_tos": 0, 00:04:30.493 "nvme_error_stat": false, 00:04:30.493 "rdma_srq_size": 0, 00:04:30.493 "io_path_stat": false, 00:04:30.493 "allow_accel_sequence": false, 00:04:30.493 "rdma_max_cq_size": 0, 00:04:30.493 "rdma_cm_event_timeout_ms": 0, 00:04:30.493 "dhchap_digests": [ 00:04:30.493 "sha256", 00:04:30.493 "sha384", 00:04:30.493 "sha512" 00:04:30.493 ], 00:04:30.493 "dhchap_dhgroups": [ 00:04:30.493 "null", 00:04:30.493 "ffdhe2048", 00:04:30.493 "ffdhe3072", 00:04:30.493 "ffdhe4096", 00:04:30.493 "ffdhe6144", 00:04:30.493 "ffdhe8192" 00:04:30.493 ] 00:04:30.493 } 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "method": "bdev_nvme_set_hotplug", 00:04:30.493 "params": { 00:04:30.493 "period_us": 100000, 00:04:30.493 "enable": false 00:04:30.493 } 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "method": "bdev_wait_for_examine" 00:04:30.493 } 00:04:30.493 ] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "scsi", 00:04:30.493 "config": null 00:04:30.493 }, 00:04:30.493 { 
00:04:30.493 "subsystem": "scheduler", 00:04:30.493 "config": [ 00:04:30.493 { 00:04:30.493 "method": "framework_set_scheduler", 00:04:30.493 "params": { 00:04:30.493 "name": "static" 00:04:30.493 } 00:04:30.493 } 00:04:30.493 ] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "vhost_scsi", 00:04:30.493 "config": [] 00:04:30.493 }, 00:04:30.493 { 00:04:30.493 "subsystem": "vhost_blk", 00:04:30.494 "config": [] 00:04:30.494 }, 00:04:30.494 { 00:04:30.494 "subsystem": "ublk", 00:04:30.494 "config": [] 00:04:30.494 }, 00:04:30.494 { 00:04:30.494 "subsystem": "nbd", 00:04:30.494 "config": [] 00:04:30.494 }, 00:04:30.494 { 00:04:30.494 "subsystem": "nvmf", 00:04:30.494 "config": [ 00:04:30.494 { 00:04:30.494 "method": "nvmf_set_config", 00:04:30.494 "params": { 00:04:30.494 "discovery_filter": "match_any", 00:04:30.494 "admin_cmd_passthru": { 00:04:30.494 "identify_ctrlr": false 00:04:30.494 }, 00:04:30.494 "dhchap_digests": [ 00:04:30.494 "sha256", 00:04:30.494 "sha384", 00:04:30.494 "sha512" 00:04:30.494 ], 00:04:30.494 "dhchap_dhgroups": [ 00:04:30.494 "null", 00:04:30.494 "ffdhe2048", 00:04:30.494 "ffdhe3072", 00:04:30.494 "ffdhe4096", 00:04:30.494 "ffdhe6144", 00:04:30.494 "ffdhe8192" 00:04:30.494 ] 00:04:30.494 } 00:04:30.494 }, 00:04:30.494 { 00:04:30.494 "method": "nvmf_set_max_subsystems", 00:04:30.494 "params": { 00:04:30.494 "max_subsystems": 1024 00:04:30.494 } 00:04:30.494 }, 00:04:30.494 { 00:04:30.494 "method": "nvmf_set_crdt", 00:04:30.494 "params": { 00:04:30.494 "crdt1": 0, 00:04:30.494 "crdt2": 0, 00:04:30.494 "crdt3": 0 00:04:30.494 } 00:04:30.494 }, 00:04:30.494 { 00:04:30.494 "method": "nvmf_create_transport", 00:04:30.494 "params": { 00:04:30.494 "trtype": "TCP", 00:04:30.494 "max_queue_depth": 128, 00:04:30.494 "max_io_qpairs_per_ctrlr": 127, 00:04:30.494 "in_capsule_data_size": 4096, 00:04:30.494 "max_io_size": 131072, 00:04:30.494 "io_unit_size": 131072, 00:04:30.494 "max_aq_depth": 128, 00:04:30.494 "num_shared_buffers": 511, 
00:04:30.494 "buf_cache_size": 4294967295, 00:04:30.494 "dif_insert_or_strip": false, 00:04:30.494 "zcopy": false, 00:04:30.494 "c2h_success": true, 00:04:30.494 "sock_priority": 0, 00:04:30.494 "abort_timeout_sec": 1, 00:04:30.494 "ack_timeout": 0, 00:04:30.494 "data_wr_pool_size": 0 00:04:30.494 } 00:04:30.494 } 00:04:30.494 ] 00:04:30.494 }, 00:04:30.494 { 00:04:30.494 "subsystem": "iscsi", 00:04:30.494 "config": [ 00:04:30.494 { 00:04:30.494 "method": "iscsi_set_options", 00:04:30.494 "params": { 00:04:30.494 "node_base": "iqn.2016-06.io.spdk", 00:04:30.494 "max_sessions": 128, 00:04:30.494 "max_connections_per_session": 2, 00:04:30.494 "max_queue_depth": 64, 00:04:30.494 "default_time2wait": 2, 00:04:30.494 "default_time2retain": 20, 00:04:30.494 "first_burst_length": 8192, 00:04:30.494 "immediate_data": true, 00:04:30.494 "allow_duplicated_isid": false, 00:04:30.494 "error_recovery_level": 0, 00:04:30.494 "nop_timeout": 60, 00:04:30.494 "nop_in_interval": 30, 00:04:30.494 "disable_chap": false, 00:04:30.494 "require_chap": false, 00:04:30.494 "mutual_chap": false, 00:04:30.494 "chap_group": 0, 00:04:30.494 "max_large_datain_per_connection": 64, 00:04:30.494 "max_r2t_per_connection": 4, 00:04:30.494 "pdu_pool_size": 36864, 00:04:30.494 "immediate_data_pool_size": 16384, 00:04:30.494 "data_out_pool_size": 2048 00:04:30.494 } 00:04:30.494 } 00:04:30.494 ] 00:04:30.494 } 00:04:30.494 ] 00:04:30.494 } 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57330 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57330 ']' 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57330 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57330 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57330' 00:04:30.494 killing process with pid 57330 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57330 00:04:30.494 09:23:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57330 00:04:33.792 09:23:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57397 00:04:33.792 09:23:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.792 09:23:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.069 09:23:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57397 00:04:39.069 09:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57397 ']' 00:04:39.069 09:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57397 00:04:39.069 09:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:39.069 09:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:39.069 09:23:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57397 00:04:39.069 killing process with pid 57397 00:04:39.069 09:23:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:39.069 09:23:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:39.069 09:23:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57397' 00:04:39.069 09:23:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57397 00:04:39.069 09:23:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57397 00:04:41.607 09:23:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.607 09:23:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.607 00:04:41.607 real 0m13.030s 00:04:41.607 user 0m12.108s 00:04:41.607 sys 0m1.224s 00:04:41.607 ************************************ 00:04:41.607 END TEST skip_rpc_with_json 00:04:41.607 ************************************ 00:04:41.607 09:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.607 09:23:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.607 09:23:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:41.607 09:23:30 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.607 09:23:30 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.607 09:23:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.607 ************************************ 00:04:41.607 START TEST skip_rpc_with_delay 00:04:41.607 ************************************ 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:41.607 
09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:41.607 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.866 [2024-11-15 09:23:30.180184] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:41.866 ************************************ 00:04:41.866 END TEST skip_rpc_with_delay 00:04:41.866 ************************************ 00:04:41.866 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:41.866 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.866 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:41.866 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.866 00:04:41.866 real 0m0.183s 00:04:41.866 user 0m0.090s 00:04:41.866 sys 0m0.089s 00:04:41.866 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.866 09:23:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:41.866 09:23:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:41.866 09:23:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:41.866 09:23:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:41.866 09:23:30 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.866 09:23:30 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.866 09:23:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.866 ************************************ 00:04:41.866 START TEST exit_on_failed_rpc_init 00:04:41.866 ************************************ 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57536 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57536 00:04:41.866 09:23:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57536 ']' 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.866 09:23:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.125 [2024-11-15 09:23:30.432239] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:04:42.125 [2024-11-15 09:23:30.432403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57536 ] 00:04:42.383 [2024-11-15 09:23:30.614044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.383 [2024-11-15 09:23:30.770580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.762 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.762 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:43.762 09:23:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.762 09:23:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.762 09:23:31 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:43.763 09:23:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.763 [2024-11-15 09:23:32.001889] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:04:43.763 [2024-11-15 09:23:32.002174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57560 ]
00:04:43.763 [2024-11-15 09:23:32.186246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.023 [2024-11-15 09:23:32.340042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:44.023 [2024-11-15 09:23:32.340195] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:04:44.023 [2024-11-15 09:23:32.340213] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:04:44.023 [2024-11-15 09:23:32.340230] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57536
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57536 ']'
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57536
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57536
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57536' killing process with pid 57536
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57536
00:04:44.281  09:23:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57536
00:04:47.573
00:04:47.573 real 0m5.138s
00:04:47.573 user 0m5.425s
00:04:47.573 sys 0m0.790s
00:04:47.573  09:23:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:47.573  09:23:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:04:47.573 ************************************
00:04:47.573 END TEST exit_on_failed_rpc_init
00:04:47.573 ************************************
00:04:47.573  09:23:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:47.573
00:04:47.573 real 0m27.072s
00:04:47.573 user 0m25.308s
00:04:47.573 sys 0m3.037s
00:04:47.573  09:23:35 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:47.573  09:23:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:47.573 ************************************
00:04:47.573 END TEST skip_rpc
00:04:47.573 ************************************
00:04:47.573  09:23:35 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:47.573  09:23:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:47.573  09:23:35 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:47.573  09:23:35 -- common/autotest_common.sh@10 -- # set +x
00:04:47.573 ************************************
00:04:47.573 START TEST rpc_client
00:04:47.573 ************************************
00:04:47.573  09:23:35 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:47.573 * Looking for test storage...
00:04:47.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:04:47.573  09:23:35 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:47.573  09:23:35 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version
00:04:47.573  09:23:35 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:47.573  09:23:35 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:47.573  09:23:35 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:47.574  09:23:35 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:47.574  09:23:35 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:47.574  09:23:35 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.574 --rc genhtml_branch_coverage=1
00:04:47.574 --rc genhtml_function_coverage=1
00:04:47.574 --rc genhtml_legend=1
00:04:47.574 --rc geninfo_all_blocks=1
00:04:47.574 --rc geninfo_unexecuted_blocks=1
00:04:47.574
00:04:47.574 '
00:04:47.574  09:23:35 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.574 --rc genhtml_branch_coverage=1
00:04:47.574 --rc genhtml_function_coverage=1
00:04:47.574 --rc genhtml_legend=1
00:04:47.574 --rc geninfo_all_blocks=1
00:04:47.574 --rc geninfo_unexecuted_blocks=1
00:04:47.574
00:04:47.574 '
00:04:47.574  09:23:35 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.574 --rc genhtml_branch_coverage=1
00:04:47.574 --rc genhtml_function_coverage=1
00:04:47.574 --rc genhtml_legend=1
00:04:47.574 --rc geninfo_all_blocks=1
00:04:47.574 --rc geninfo_unexecuted_blocks=1
00:04:47.574
00:04:47.574 '
00:04:47.574  09:23:35 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:47.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.574 --rc genhtml_branch_coverage=1
00:04:47.574 --rc genhtml_function_coverage=1
00:04:47.574 --rc genhtml_legend=1
00:04:47.574 --rc geninfo_all_blocks=1
00:04:47.574 --rc geninfo_unexecuted_blocks=1
00:04:47.574
00:04:47.574 '
00:04:47.574  09:23:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:04:47.574 OK
00:04:47.574  09:23:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:47.574
00:04:47.574 real 0m0.309s
00:04:47.574 user 0m0.186s
00:04:47.574 sys 0m0.135s ************************************ END TEST rpc_client ************************************
00:04:47.574  09:23:35 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:47.574  09:23:35 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:47.574  09:23:35 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 09:23:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 09:23:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 09:23:35 -- common/autotest_common.sh@10 -- # set +x
00:04:47.574 ************************************
00:04:47.574 START TEST json_config
00:04:47.574 ************************************
00:04:47.574  09:23:35 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:47.834  09:23:36 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:47.834  09:23:36 json_config -- common/autotest_common.sh@1691 -- # lcov --version
00:04:47.834  09:23:36 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:47.834  09:23:36 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:47.834  09:23:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:47.834  09:23:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:47.834  09:23:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:47.834  09:23:36 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:47.834  09:23:36 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:47.834  09:23:36 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:47.834  09:23:36 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:47.834  09:23:36 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:47.834  09:23:36 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:47.834  09:23:36 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:47.834  09:23:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:47.834  09:23:36 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:47.835  09:23:36 json_config -- scripts/common.sh@345 -- # : 1
00:04:47.835  09:23:36 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:47.835  09:23:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:47.835  09:23:36 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:47.835  09:23:36 json_config -- scripts/common.sh@353 -- # local d=1
00:04:47.835  09:23:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:47.835  09:23:36 json_config -- scripts/common.sh@355 -- # echo 1
00:04:47.835  09:23:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:47.835  09:23:36 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:47.835  09:23:36 json_config -- scripts/common.sh@353 -- # local d=2
00:04:47.835  09:23:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:47.835  09:23:36 json_config -- scripts/common.sh@355 -- # echo 2
00:04:47.835  09:23:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:47.835  09:23:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:47.835  09:23:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:47.835  09:23:36 json_config -- scripts/common.sh@368 -- # return 0
00:04:47.835  09:23:36 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:47.835  09:23:36 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:04:47.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.835 --rc genhtml_branch_coverage=1
00:04:47.835 --rc genhtml_function_coverage=1
00:04:47.835 --rc genhtml_legend=1
00:04:47.835 --rc geninfo_all_blocks=1
00:04:47.835 --rc geninfo_unexecuted_blocks=1
00:04:47.835
00:04:47.835 '
00:04:47.835  09:23:36 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:04:47.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.835 --rc genhtml_branch_coverage=1
00:04:47.835 --rc genhtml_function_coverage=1
00:04:47.835 --rc genhtml_legend=1
00:04:47.835 --rc geninfo_all_blocks=1
00:04:47.835 --rc geninfo_unexecuted_blocks=1
00:04:47.835
00:04:47.835 '
00:04:47.835  09:23:36 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:04:47.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.835 --rc genhtml_branch_coverage=1
00:04:47.835 --rc genhtml_function_coverage=1
00:04:47.835 --rc genhtml_legend=1
00:04:47.835 --rc geninfo_all_blocks=1
00:04:47.835 --rc geninfo_unexecuted_blocks=1
00:04:47.835
00:04:47.835 '
00:04:47.835  09:23:36 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:04:47.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:47.835 --rc genhtml_branch_coverage=1
00:04:47.835 --rc genhtml_function_coverage=1
00:04:47.835 --rc genhtml_legend=1
00:04:47.835 --rc geninfo_all_blocks=1
00:04:47.835 --rc geninfo_unexecuted_blocks=1
00:04:47.835
00:04:47.835 '
00:04:47.835  09:23:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19855559-90e4-4c97-8397-e2a0f4af42ac
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=19855559-90e4-4c97-8397-e2a0f4af42ac
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:04:47.835  09:23:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:04:47.835  09:23:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:04:47.835  09:23:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:47.835  09:23:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:47.835  09:23:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.835  09:23:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.835  09:23:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.835  09:23:36 json_config -- paths/export.sh@5 -- # export PATH
00:04:47.835  09:23:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@51 -- # : 0
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:04:47.835  09:23:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:04:47.835  09:23:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:47.835 09:23:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:47.835 09:23:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:47.835 09:23:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:47.835 09:23:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:47.835 09:23:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:47.835 WARNING: No tests are enabled so not running JSON configuration tests 00:04:47.835 09:23:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:47.835 00:04:47.835 real 0m0.241s 00:04:47.835 user 0m0.155s 00:04:47.835 sys 0m0.091s 00:04:47.835 09:23:36 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.835 09:23:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.835 ************************************ 00:04:47.835 END TEST json_config 00:04:47.835 ************************************ 00:04:47.835 09:23:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.835 09:23:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.835 09:23:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.835 09:23:36 -- common/autotest_common.sh@10 -- # set +x 00:04:47.835 ************************************ 00:04:47.835 START TEST json_config_extra_key 00:04:47.835 ************************************ 00:04:47.835 09:23:36 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.104 09:23:36 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.104 09:23:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.104 --rc genhtml_branch_coverage=1 00:04:48.104 --rc genhtml_function_coverage=1 00:04:48.104 --rc genhtml_legend=1 00:04:48.104 --rc geninfo_all_blocks=1 00:04:48.104 --rc geninfo_unexecuted_blocks=1 00:04:48.104 00:04:48.104 ' 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.104 --rc genhtml_branch_coverage=1 00:04:48.104 --rc genhtml_function_coverage=1 00:04:48.104 --rc 
genhtml_legend=1 00:04:48.104 --rc geninfo_all_blocks=1 00:04:48.104 --rc geninfo_unexecuted_blocks=1 00:04:48.104 00:04:48.104 ' 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.104 --rc genhtml_branch_coverage=1 00:04:48.104 --rc genhtml_function_coverage=1 00:04:48.104 --rc genhtml_legend=1 00:04:48.104 --rc geninfo_all_blocks=1 00:04:48.104 --rc geninfo_unexecuted_blocks=1 00:04:48.104 00:04:48.104 ' 00:04:48.104 09:23:36 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.104 --rc genhtml_branch_coverage=1 00:04:48.104 --rc genhtml_function_coverage=1 00:04:48.104 --rc genhtml_legend=1 00:04:48.104 --rc geninfo_all_blocks=1 00:04:48.104 --rc geninfo_unexecuted_blocks=1 00:04:48.104 00:04:48.104 ' 00:04:48.104 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19855559-90e4-4c97-8397-e2a0f4af42ac 00:04:48.104 09:23:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=19855559-90e4-4c97-8397-e2a0f4af42ac 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:48.105 09:23:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:48.105 09:23:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.105 09:23:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.105 09:23:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.105 09:23:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.105 09:23:36 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.105 09:23:36 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.105 09:23:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:48.105 09:23:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:48.105 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:48.105 09:23:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:48.105 INFO: launching applications... 
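The `[: : integer expression expected` message recorded above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: an unset setting expands to the empty string, and `test`'s `-eq` requires both operands to be integers. A minimal sketch of the failure mode and a defensive rewrite; the variable name `hugepage_flag` is hypothetical, not the one the SPDK script uses.

```shell
#!/usr/bin/env bash
# Hypothetical variable standing in for the unset value tested by nvmf/common.sh.
hugepage_flag=""

# [ "$hugepage_flag" -eq 1 ] would print "[: : integer expression expected"
# here, because test(1) rejects the empty string as a -eq operand.
# Expanding with a default keeps the comparison well-formed:
if [ "${hugepage_flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"   # the empty/unset case falls through here
fi
```

The same guard also works for `[[ ... ]]` arithmetic tests; the key point is that the operand must expand to a non-empty integer before the comparison runs.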
00:04:48.105 09:23:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57775 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:48.105 Waiting for target to run... 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57775 /var/tmp/spdk_tgt.sock 00:04:48.105 09:23:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:48.105 09:23:36 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57775 ']' 00:04:48.105 09:23:36 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.105 09:23:36 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.105 09:23:36 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:48.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.105 09:23:36 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.105 09:23:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.370 [2024-11-15 09:23:36.625838] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:04:48.370 [2024-11-15 09:23:36.627063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57775 ] 00:04:48.940 [2024-11-15 09:23:37.243136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.940 [2024-11-15 09:23:37.367009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.878 00:04:49.878 INFO: shutting down applications... 00:04:49.878 09:23:38 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.878 09:23:38 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:49.878 09:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:49.878 09:23:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57775 ]] 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57775 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:49.878 09:23:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.448 09:23:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.448 09:23:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.448 09:23:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:50.448 09:23:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.707 09:23:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.707 09:23:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.707 09:23:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:50.707 09:23:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.277 09:23:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.277 09:23:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.277 09:23:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:51.277 09:23:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.846 09:23:40 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:51.846 09:23:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.846 09:23:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:51.846 09:23:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.415 09:23:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.415 09:23:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.415 09:23:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:52.415 09:23:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.983 09:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.983 09:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.983 09:23:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:52.983 09:23:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.242 09:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.243 09:23:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.243 09:23:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57775 00:04:53.243 09:23:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.243 09:23:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.243 09:23:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.243 SPDK target shutdown done 00:04:53.243 09:23:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.243 Success 00:04:53.243 09:23:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.243 ************************************ 00:04:53.243 END TEST json_config_extra_key 00:04:53.243 ************************************ 00:04:53.243 00:04:53.243 real 0m5.427s 00:04:53.243 user 
0m4.744s 00:04:53.243 sys 0m0.850s 00:04:53.243 09:23:41 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.243 09:23:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.502 09:23:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.502 09:23:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.502 09:23:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.502 09:23:41 -- common/autotest_common.sh@10 -- # set +x 00:04:53.502 ************************************ 00:04:53.502 START TEST alias_rpc 00:04:53.502 ************************************ 00:04:53.502 09:23:41 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.502 * Looking for test storage... 00:04:53.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.502 09:23:41 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.502 09:23:41 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.502 09:23:41 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.502 09:23:41 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@340 
-- # ver1_l=2 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.502 09:23:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
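Each test in this log sources scripts/common.sh and runs `lt 1.15 2` against the `lcov --version` output to decide whether the branch-coverage `--rc` flags are safe to pass. A simplified sketch of that comparison, splitting versions on `.`, `-` and `:` and comparing fields numerically; the real `cmp_versions` also validates each field through its `decimal` helper, and plain decimal fields (no leading zeros) are assumed here.

```shell
#!/usr/bin/env bash
# Simplified version compare in the spirit of scripts/common.sh cmp_versions:
# returns 0 (true) when $1 sorts strictly before $2.
version_lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        # Missing fields act as 0, so "2" and "2.0" compare equal.
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
```

With this ordering `1.15` sorts before `2`, which is why the run above takes the branch that exports the extra `LCOV_OPTS`.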
00:04:53.503 09:23:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.503 09:23:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.503 09:23:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.503 09:23:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.503 --rc genhtml_branch_coverage=1 00:04:53.503 --rc genhtml_function_coverage=1 00:04:53.503 --rc genhtml_legend=1 00:04:53.503 --rc geninfo_all_blocks=1 00:04:53.503 --rc geninfo_unexecuted_blocks=1 00:04:53.503 00:04:53.503 ' 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.503 --rc genhtml_branch_coverage=1 00:04:53.503 --rc genhtml_function_coverage=1 00:04:53.503 --rc genhtml_legend=1 00:04:53.503 --rc geninfo_all_blocks=1 00:04:53.503 --rc geninfo_unexecuted_blocks=1 00:04:53.503 00:04:53.503 ' 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.503 --rc genhtml_branch_coverage=1 00:04:53.503 --rc genhtml_function_coverage=1 00:04:53.503 --rc genhtml_legend=1 00:04:53.503 --rc geninfo_all_blocks=1 00:04:53.503 --rc geninfo_unexecuted_blocks=1 00:04:53.503 00:04:53.503 ' 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.503 --rc genhtml_branch_coverage=1 00:04:53.503 --rc genhtml_function_coverage=1 00:04:53.503 --rc genhtml_legend=1 00:04:53.503 --rc geninfo_all_blocks=1 00:04:53.503 --rc 
geninfo_unexecuted_blocks=1 00:04:53.503 00:04:53.503 ' 00:04:53.503 09:23:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.503 09:23:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57893 00:04:53.503 09:23:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57893 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57893 ']' 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.503 09:23:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.503 09:23:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.762 [2024-11-15 09:23:42.067609] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
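alias_rpc.sh line 10 installs `trap 'killprocess $spdk_tgt_pid; exit 1' ERR` so that any failing command tears down the background spdk_tgt before the test exits. A minimal sketch of that cleanup idiom under stated assumptions: the variable name `tgt_pid` is hypothetical, and the real `killprocess` additionally checks the process name (as the `ps --no-headers -o comm=` call in this log shows) before sending the signal.

```shell
#!/usr/bin/env bash
# Sketch of the ERR-trap cleanup idiom used by alias_rpc.sh.
set -E   # propagate the ERR trap into functions

cleanup() {
    # Kill the background target if one was ever started.
    if [[ -n ${tgt_pid:-} ]]; then
        kill "$tgt_pid" 2>/dev/null
    fi
    return 0   # never let cleanup itself re-trigger the trap
}
trap 'cleanup; exit 1' ERR
```

Without `set -E` the ERR trap would not fire for commands that fail inside shell functions, which is why SPDK's common harness enables it alongside the trap.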
00:04:53.762 [2024-11-15 09:23:42.067823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57893 ] 00:04:54.020 [2024-11-15 09:23:42.233574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.020 [2024-11-15 09:23:42.362707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.957 09:23:43 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.957 09:23:43 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:54.957 09:23:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:55.248 09:23:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57893 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57893 ']' 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57893 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57893 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57893' 00:04:55.248 killing process with pid 57893 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@971 -- # kill 57893 00:04:55.248 09:23:43 alias_rpc -- common/autotest_common.sh@976 -- # wait 57893 00:04:58.539 00:04:58.539 real 0m4.578s 00:04:58.539 user 0m4.689s 00:04:58.539 sys 0m0.611s 00:04:58.539 09:23:46 alias_rpc -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:04:58.539 09:23:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.539 ************************************ 00:04:58.539 END TEST alias_rpc 00:04:58.539 ************************************ 00:04:58.539 09:23:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.539 09:23:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.539 09:23:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.539 09:23:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.539 09:23:46 -- common/autotest_common.sh@10 -- # set +x 00:04:58.539 ************************************ 00:04:58.539 START TEST spdkcli_tcp 00:04:58.539 ************************************ 00:04:58.539 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.539 * Looking for test storage... 00:04:58.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:58.539 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.539 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.539 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.539 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.539 
09:23:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.539 09:23:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.540 09:23:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.540 --rc genhtml_branch_coverage=1 00:04:58.540 --rc genhtml_function_coverage=1 00:04:58.540 --rc genhtml_legend=1 
00:04:58.540 --rc geninfo_all_blocks=1 00:04:58.540 --rc geninfo_unexecuted_blocks=1 00:04:58.540 00:04:58.540 ' 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.540 --rc genhtml_branch_coverage=1 00:04:58.540 --rc genhtml_function_coverage=1 00:04:58.540 --rc genhtml_legend=1 00:04:58.540 --rc geninfo_all_blocks=1 00:04:58.540 --rc geninfo_unexecuted_blocks=1 00:04:58.540 00:04:58.540 ' 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.540 --rc genhtml_branch_coverage=1 00:04:58.540 --rc genhtml_function_coverage=1 00:04:58.540 --rc genhtml_legend=1 00:04:58.540 --rc geninfo_all_blocks=1 00:04:58.540 --rc geninfo_unexecuted_blocks=1 00:04:58.540 00:04:58.540 ' 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.540 --rc genhtml_branch_coverage=1 00:04:58.540 --rc genhtml_function_coverage=1 00:04:58.540 --rc genhtml_legend=1 00:04:58.540 --rc geninfo_all_blocks=1 00:04:58.540 --rc geninfo_unexecuted_blocks=1 00:04:58.540 00:04:58.540 ' 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.540 09:23:46 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58006 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.540 09:23:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58006 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58006 ']' 00:04:58.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.540 09:23:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.540 [2024-11-15 09:23:46.738421] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:04:58.540 [2024-11-15 09:23:46.738547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58006 ] 00:04:58.540 [2024-11-15 09:23:46.917726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.799 [2024-11-15 09:23:47.081707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.799 [2024-11-15 09:23:47.081750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.177 09:23:48 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.178 09:23:48 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:00.178 09:23:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:00.178 09:23:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58034 00:05:00.178 09:23:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:00.437 [ 00:05:00.437 "bdev_malloc_delete", 00:05:00.438 "bdev_malloc_create", 00:05:00.438 "bdev_null_resize", 00:05:00.438 "bdev_null_delete", 00:05:00.438 "bdev_null_create", 00:05:00.438 "bdev_nvme_cuse_unregister", 00:05:00.438 "bdev_nvme_cuse_register", 00:05:00.438 "bdev_opal_new_user", 00:05:00.438 "bdev_opal_set_lock_state", 00:05:00.438 "bdev_opal_delete", 00:05:00.438 "bdev_opal_get_info", 00:05:00.438 "bdev_opal_create", 00:05:00.438 "bdev_nvme_opal_revert", 00:05:00.438 "bdev_nvme_opal_init", 00:05:00.438 "bdev_nvme_send_cmd", 00:05:00.438 "bdev_nvme_set_keys", 00:05:00.438 "bdev_nvme_get_path_iostat", 00:05:00.438 "bdev_nvme_get_mdns_discovery_info", 00:05:00.438 "bdev_nvme_stop_mdns_discovery", 00:05:00.438 "bdev_nvme_start_mdns_discovery", 00:05:00.438 "bdev_nvme_set_multipath_policy", 00:05:00.438 
"bdev_nvme_set_preferred_path", 00:05:00.438 "bdev_nvme_get_io_paths", 00:05:00.438 "bdev_nvme_remove_error_injection", 00:05:00.438 "bdev_nvme_add_error_injection", 00:05:00.438 "bdev_nvme_get_discovery_info", 00:05:00.438 "bdev_nvme_stop_discovery", 00:05:00.438 "bdev_nvme_start_discovery", 00:05:00.438 "bdev_nvme_get_controller_health_info", 00:05:00.438 "bdev_nvme_disable_controller", 00:05:00.438 "bdev_nvme_enable_controller", 00:05:00.438 "bdev_nvme_reset_controller", 00:05:00.438 "bdev_nvme_get_transport_statistics", 00:05:00.438 "bdev_nvme_apply_firmware", 00:05:00.438 "bdev_nvme_detach_controller", 00:05:00.438 "bdev_nvme_get_controllers", 00:05:00.438 "bdev_nvme_attach_controller", 00:05:00.438 "bdev_nvme_set_hotplug", 00:05:00.438 "bdev_nvme_set_options", 00:05:00.438 "bdev_passthru_delete", 00:05:00.438 "bdev_passthru_create", 00:05:00.438 "bdev_lvol_set_parent_bdev", 00:05:00.438 "bdev_lvol_set_parent", 00:05:00.438 "bdev_lvol_check_shallow_copy", 00:05:00.438 "bdev_lvol_start_shallow_copy", 00:05:00.438 "bdev_lvol_grow_lvstore", 00:05:00.438 "bdev_lvol_get_lvols", 00:05:00.438 "bdev_lvol_get_lvstores", 00:05:00.438 "bdev_lvol_delete", 00:05:00.438 "bdev_lvol_set_read_only", 00:05:00.438 "bdev_lvol_resize", 00:05:00.438 "bdev_lvol_decouple_parent", 00:05:00.438 "bdev_lvol_inflate", 00:05:00.438 "bdev_lvol_rename", 00:05:00.438 "bdev_lvol_clone_bdev", 00:05:00.438 "bdev_lvol_clone", 00:05:00.438 "bdev_lvol_snapshot", 00:05:00.438 "bdev_lvol_create", 00:05:00.438 "bdev_lvol_delete_lvstore", 00:05:00.438 "bdev_lvol_rename_lvstore", 00:05:00.438 "bdev_lvol_create_lvstore", 00:05:00.438 "bdev_raid_set_options", 00:05:00.438 "bdev_raid_remove_base_bdev", 00:05:00.438 "bdev_raid_add_base_bdev", 00:05:00.438 "bdev_raid_delete", 00:05:00.438 "bdev_raid_create", 00:05:00.438 "bdev_raid_get_bdevs", 00:05:00.438 "bdev_error_inject_error", 00:05:00.438 "bdev_error_delete", 00:05:00.438 "bdev_error_create", 00:05:00.438 "bdev_split_delete", 00:05:00.438 
"bdev_split_create", 00:05:00.438 "bdev_delay_delete", 00:05:00.438 "bdev_delay_create", 00:05:00.438 "bdev_delay_update_latency", 00:05:00.438 "bdev_zone_block_delete", 00:05:00.438 "bdev_zone_block_create", 00:05:00.438 "blobfs_create", 00:05:00.438 "blobfs_detect", 00:05:00.438 "blobfs_set_cache_size", 00:05:00.438 "bdev_aio_delete", 00:05:00.438 "bdev_aio_rescan", 00:05:00.438 "bdev_aio_create", 00:05:00.438 "bdev_ftl_set_property", 00:05:00.438 "bdev_ftl_get_properties", 00:05:00.438 "bdev_ftl_get_stats", 00:05:00.438 "bdev_ftl_unmap", 00:05:00.438 "bdev_ftl_unload", 00:05:00.438 "bdev_ftl_delete", 00:05:00.438 "bdev_ftl_load", 00:05:00.438 "bdev_ftl_create", 00:05:00.438 "bdev_virtio_attach_controller", 00:05:00.438 "bdev_virtio_scsi_get_devices", 00:05:00.438 "bdev_virtio_detach_controller", 00:05:00.438 "bdev_virtio_blk_set_hotplug", 00:05:00.438 "bdev_iscsi_delete", 00:05:00.438 "bdev_iscsi_create", 00:05:00.438 "bdev_iscsi_set_options", 00:05:00.438 "accel_error_inject_error", 00:05:00.438 "ioat_scan_accel_module", 00:05:00.438 "dsa_scan_accel_module", 00:05:00.438 "iaa_scan_accel_module", 00:05:00.438 "keyring_file_remove_key", 00:05:00.438 "keyring_file_add_key", 00:05:00.438 "keyring_linux_set_options", 00:05:00.438 "fsdev_aio_delete", 00:05:00.438 "fsdev_aio_create", 00:05:00.438 "iscsi_get_histogram", 00:05:00.438 "iscsi_enable_histogram", 00:05:00.438 "iscsi_set_options", 00:05:00.438 "iscsi_get_auth_groups", 00:05:00.438 "iscsi_auth_group_remove_secret", 00:05:00.438 "iscsi_auth_group_add_secret", 00:05:00.438 "iscsi_delete_auth_group", 00:05:00.438 "iscsi_create_auth_group", 00:05:00.438 "iscsi_set_discovery_auth", 00:05:00.438 "iscsi_get_options", 00:05:00.438 "iscsi_target_node_request_logout", 00:05:00.438 "iscsi_target_node_set_redirect", 00:05:00.438 "iscsi_target_node_set_auth", 00:05:00.438 "iscsi_target_node_add_lun", 00:05:00.438 "iscsi_get_stats", 00:05:00.438 "iscsi_get_connections", 00:05:00.438 "iscsi_portal_group_set_auth", 
00:05:00.438 "iscsi_start_portal_group", 00:05:00.438 "iscsi_delete_portal_group", 00:05:00.438 "iscsi_create_portal_group", 00:05:00.438 "iscsi_get_portal_groups", 00:05:00.438 "iscsi_delete_target_node", 00:05:00.438 "iscsi_target_node_remove_pg_ig_maps", 00:05:00.438 "iscsi_target_node_add_pg_ig_maps", 00:05:00.438 "iscsi_create_target_node", 00:05:00.438 "iscsi_get_target_nodes", 00:05:00.438 "iscsi_delete_initiator_group", 00:05:00.438 "iscsi_initiator_group_remove_initiators", 00:05:00.438 "iscsi_initiator_group_add_initiators", 00:05:00.438 "iscsi_create_initiator_group", 00:05:00.438 "iscsi_get_initiator_groups", 00:05:00.438 "nvmf_set_crdt", 00:05:00.438 "nvmf_set_config", 00:05:00.438 "nvmf_set_max_subsystems", 00:05:00.438 "nvmf_stop_mdns_prr", 00:05:00.438 "nvmf_publish_mdns_prr", 00:05:00.438 "nvmf_subsystem_get_listeners", 00:05:00.438 "nvmf_subsystem_get_qpairs", 00:05:00.438 "nvmf_subsystem_get_controllers", 00:05:00.438 "nvmf_get_stats", 00:05:00.438 "nvmf_get_transports", 00:05:00.438 "nvmf_create_transport", 00:05:00.438 "nvmf_get_targets", 00:05:00.438 "nvmf_delete_target", 00:05:00.438 "nvmf_create_target", 00:05:00.438 "nvmf_subsystem_allow_any_host", 00:05:00.438 "nvmf_subsystem_set_keys", 00:05:00.438 "nvmf_subsystem_remove_host", 00:05:00.438 "nvmf_subsystem_add_host", 00:05:00.438 "nvmf_ns_remove_host", 00:05:00.438 "nvmf_ns_add_host", 00:05:00.438 "nvmf_subsystem_remove_ns", 00:05:00.438 "nvmf_subsystem_set_ns_ana_group", 00:05:00.438 "nvmf_subsystem_add_ns", 00:05:00.438 "nvmf_subsystem_listener_set_ana_state", 00:05:00.438 "nvmf_discovery_get_referrals", 00:05:00.438 "nvmf_discovery_remove_referral", 00:05:00.438 "nvmf_discovery_add_referral", 00:05:00.438 "nvmf_subsystem_remove_listener", 00:05:00.438 "nvmf_subsystem_add_listener", 00:05:00.438 "nvmf_delete_subsystem", 00:05:00.438 "nvmf_create_subsystem", 00:05:00.438 "nvmf_get_subsystems", 00:05:00.438 "env_dpdk_get_mem_stats", 00:05:00.438 "nbd_get_disks", 00:05:00.438 
"nbd_stop_disk", 00:05:00.438 "nbd_start_disk", 00:05:00.438 "ublk_recover_disk", 00:05:00.438 "ublk_get_disks", 00:05:00.438 "ublk_stop_disk", 00:05:00.438 "ublk_start_disk", 00:05:00.438 "ublk_destroy_target", 00:05:00.438 "ublk_create_target", 00:05:00.438 "virtio_blk_create_transport", 00:05:00.438 "virtio_blk_get_transports", 00:05:00.438 "vhost_controller_set_coalescing", 00:05:00.438 "vhost_get_controllers", 00:05:00.438 "vhost_delete_controller", 00:05:00.438 "vhost_create_blk_controller", 00:05:00.438 "vhost_scsi_controller_remove_target", 00:05:00.438 "vhost_scsi_controller_add_target", 00:05:00.438 "vhost_start_scsi_controller", 00:05:00.438 "vhost_create_scsi_controller", 00:05:00.438 "thread_set_cpumask", 00:05:00.438 "scheduler_set_options", 00:05:00.438 "framework_get_governor", 00:05:00.438 "framework_get_scheduler", 00:05:00.438 "framework_set_scheduler", 00:05:00.438 "framework_get_reactors", 00:05:00.438 "thread_get_io_channels", 00:05:00.438 "thread_get_pollers", 00:05:00.438 "thread_get_stats", 00:05:00.438 "framework_monitor_context_switch", 00:05:00.438 "spdk_kill_instance", 00:05:00.438 "log_enable_timestamps", 00:05:00.438 "log_get_flags", 00:05:00.438 "log_clear_flag", 00:05:00.438 "log_set_flag", 00:05:00.438 "log_get_level", 00:05:00.438 "log_set_level", 00:05:00.438 "log_get_print_level", 00:05:00.438 "log_set_print_level", 00:05:00.438 "framework_enable_cpumask_locks", 00:05:00.438 "framework_disable_cpumask_locks", 00:05:00.438 "framework_wait_init", 00:05:00.438 "framework_start_init", 00:05:00.438 "scsi_get_devices", 00:05:00.438 "bdev_get_histogram", 00:05:00.438 "bdev_enable_histogram", 00:05:00.438 "bdev_set_qos_limit", 00:05:00.438 "bdev_set_qd_sampling_period", 00:05:00.438 "bdev_get_bdevs", 00:05:00.438 "bdev_reset_iostat", 00:05:00.438 "bdev_get_iostat", 00:05:00.438 "bdev_examine", 00:05:00.438 "bdev_wait_for_examine", 00:05:00.438 "bdev_set_options", 00:05:00.438 "accel_get_stats", 00:05:00.438 "accel_set_options", 
00:05:00.438 "accel_set_driver", 00:05:00.438 "accel_crypto_key_destroy", 00:05:00.438 "accel_crypto_keys_get", 00:05:00.438 "accel_crypto_key_create", 00:05:00.438 "accel_assign_opc", 00:05:00.438 "accel_get_module_info", 00:05:00.438 "accel_get_opc_assignments", 00:05:00.438 "vmd_rescan", 00:05:00.438 "vmd_remove_device", 00:05:00.438 "vmd_enable", 00:05:00.438 "sock_get_default_impl", 00:05:00.438 "sock_set_default_impl", 00:05:00.438 "sock_impl_set_options", 00:05:00.439 "sock_impl_get_options", 00:05:00.439 "iobuf_get_stats", 00:05:00.439 "iobuf_set_options", 00:05:00.439 "keyring_get_keys", 00:05:00.439 "framework_get_pci_devices", 00:05:00.439 "framework_get_config", 00:05:00.439 "framework_get_subsystems", 00:05:00.439 "fsdev_set_opts", 00:05:00.439 "fsdev_get_opts", 00:05:00.439 "trace_get_info", 00:05:00.439 "trace_get_tpoint_group_mask", 00:05:00.439 "trace_disable_tpoint_group", 00:05:00.439 "trace_enable_tpoint_group", 00:05:00.439 "trace_clear_tpoint_mask", 00:05:00.439 "trace_set_tpoint_mask", 00:05:00.439 "notify_get_notifications", 00:05:00.439 "notify_get_types", 00:05:00.439 "spdk_get_version", 00:05:00.439 "rpc_get_methods" 00:05:00.439 ] 00:05:00.439 09:23:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.439 09:23:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:00.439 09:23:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58006 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58006 ']' 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58006 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:00.439 09:23:48 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58006 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58006' 00:05:00.439 killing process with pid 58006 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58006 00:05:00.439 09:23:48 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58006 00:05:03.731 00:05:03.731 real 0m5.442s 00:05:03.731 user 0m9.825s 00:05:03.731 sys 0m0.877s 00:05:03.731 09:23:51 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.731 09:23:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.731 ************************************ 00:05:03.731 END TEST spdkcli_tcp 00:05:03.731 ************************************ 00:05:03.731 09:23:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.731 09:23:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.731 09:23:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.731 09:23:51 -- common/autotest_common.sh@10 -- # set +x 00:05:03.731 ************************************ 00:05:03.731 START TEST dpdk_mem_utility 00:05:03.731 ************************************ 00:05:03.731 09:23:51 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.731 * Looking for test storage... 
00:05:03.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:03.731 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.731 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.731 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.731 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.731 09:23:52 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.731 09:23:52 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.731 09:23:52 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.731 09:23:52 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.731 09:23:52 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.731 09:23:52 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.731 09:23:52 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.732 09:23:52 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.732 --rc genhtml_branch_coverage=1 00:05:03.732 --rc genhtml_function_coverage=1 00:05:03.732 --rc genhtml_legend=1 00:05:03.732 --rc geninfo_all_blocks=1 00:05:03.732 --rc geninfo_unexecuted_blocks=1 00:05:03.732 00:05:03.732 ' 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.732 --rc genhtml_branch_coverage=1 00:05:03.732 --rc genhtml_function_coverage=1 00:05:03.732 --rc genhtml_legend=1 00:05:03.732 --rc geninfo_all_blocks=1 00:05:03.732 --rc 
geninfo_unexecuted_blocks=1 00:05:03.732 00:05:03.732 ' 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.732 --rc genhtml_branch_coverage=1 00:05:03.732 --rc genhtml_function_coverage=1 00:05:03.732 --rc genhtml_legend=1 00:05:03.732 --rc geninfo_all_blocks=1 00:05:03.732 --rc geninfo_unexecuted_blocks=1 00:05:03.732 00:05:03.732 ' 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.732 --rc genhtml_branch_coverage=1 00:05:03.732 --rc genhtml_function_coverage=1 00:05:03.732 --rc genhtml_legend=1 00:05:03.732 --rc geninfo_all_blocks=1 00:05:03.732 --rc geninfo_unexecuted_blocks=1 00:05:03.732 00:05:03.732 ' 00:05:03.732 09:23:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:03.732 09:23:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58139 00:05:03.732 09:23:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.732 09:23:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58139 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58139 ']' 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.732 09:23:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.992 [2024-11-15 09:23:52.243120] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:03.992 [2024-11-15 09:23:52.243383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58139 ] 00:05:03.992 [2024-11-15 09:23:52.422588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.252 [2024-11-15 09:23:52.577448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.633 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.633 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:05.633 09:23:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:05.633 09:23:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:05.633 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.633 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.633 { 00:05:05.633 "filename": "/tmp/spdk_mem_dump.txt" 00:05:05.633 } 00:05:05.633 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.633 09:23:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:05.634 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:05.634 1 heaps totaling size 816.000000 MiB 00:05:05.634 size: 816.000000 MiB heap id: 0 00:05:05.634 end heaps---------- 00:05:05.634 9 mempools totaling size 595.772034 MiB 00:05:05.634 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:05.634 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:05.634 size: 92.545471 MiB name: bdev_io_58139 00:05:05.634 size: 50.003479 MiB name: msgpool_58139 00:05:05.634 size: 36.509338 MiB name: fsdev_io_58139 00:05:05.634 size: 21.763794 MiB name: PDU_Pool 00:05:05.634 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:05.634 size: 4.133484 MiB name: evtpool_58139 00:05:05.634 size: 0.026123 MiB name: Session_Pool 00:05:05.634 end mempools------- 00:05:05.634 6 memzones totaling size 4.142822 MiB 00:05:05.634 size: 1.000366 MiB name: RG_ring_0_58139 00:05:05.634 size: 1.000366 MiB name: RG_ring_1_58139 00:05:05.634 size: 1.000366 MiB name: RG_ring_4_58139 00:05:05.634 size: 1.000366 MiB name: RG_ring_5_58139 00:05:05.634 size: 0.125366 MiB name: RG_ring_2_58139 00:05:05.634 size: 0.015991 MiB name: RG_ring_3_58139 00:05:05.634 end memzones------- 00:05:05.634 09:23:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:05.634 heap id: 0 total size: 816.000000 MiB number of busy elements: 308 number of free elements: 18 00:05:05.634 list of free elements. 
size: 16.793091 MiB 00:05:05.634 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:05.634 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:05.634 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:05.634 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:05.634 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:05.634 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:05.634 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:05.634 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:05.634 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:05.634 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:05.634 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:05.634 element at address: 0x20001ac00000 with size: 0.563660 MiB 00:05:05.634 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:05.634 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:05.634 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:05.634 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:05.634 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:05.634 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:05.634 list of standard malloc elements. 
size: 199.286011 MiB 00:05:05.634 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:05.634 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:05.634 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:05.634 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:05.634 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:05.634 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:05.634 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:05.634 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:05.634 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:05.634 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:05.634 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:05.634 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:05.634 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:05.634 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:05.634 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:05.634 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:05.634 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:05.635 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:05.635 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:05.635 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac918c0 with size: 0.000244 
MiB 00:05:05.635 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac934c0 
with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:05.635 element at 
address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:05.635 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806c380 with size: 0.000244 MiB 
00:05:05.635 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:05.635 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806df80 with 
size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:05.636 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:05.636 element at address: 
0x20002806fb80 with size: 0.000244 MiB
00:05:05.636 element at address: 0x20002806fc80 with size: 0.000244 MiB
00:05:05.636 element at address: 0x20002806fd80 with size: 0.000244 MiB
00:05:05.636 element at address: 0x20002806fe80 with size: 0.000244 MiB
00:05:05.636 list of memzone associated elements. size: 599.920898 MiB
00:05:05.636 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:05.636 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:05.636 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:05.636 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:05.636 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:05.636 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58139_0
00:05:05.636 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:05.636 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58139_0
00:05:05.636 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:05.636 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58139_0
00:05:05.636 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:05.636 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:05.636 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:05.636 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:05.636 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:05.636 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58139_0
00:05:05.636 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:05.636 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58139
00:05:05.636 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:05.636 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58139
00:05:05.636 element at address: 0x200018efde00 with size: 1.008179 MiB
00:05:05.636 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:05.636 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:05:05.636 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:05.636 element at address: 0x200018afde00 with size: 1.008179 MiB
00:05:05.636 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:05.636 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:05:05.636 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:05.636 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:05.636 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58139
00:05:05.636 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:05.636 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58139
00:05:05.636 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:05:05.636 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58139
00:05:05.636 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:05:05.636 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58139
00:05:05.636 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:05.636 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58139
00:05:05.636 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:05.636 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58139
00:05:05.636 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:05:05.636 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:05.636 element at address: 0x200012c72280 with size: 0.500549 MiB
00:05:05.636 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:05.636 element at address: 0x20001967c440 with size: 0.250549 MiB
00:05:05.636 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:05.636 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:05.636 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58139
00:05:05.636 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:05.636 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58139
00:05:05.636 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:05:05.636 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:05.636 element at address: 0x200028064140 with size: 0.023804 MiB
00:05:05.636 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:05.636 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:05.636 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58139
00:05:05.636 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:05:05.636 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:05.636 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:05.636 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58139
00:05:05.636 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:05.636 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58139
00:05:05.636 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:05.636 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58139
00:05:05.636 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:05:05.636 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:05.636 09:23:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:05.636 09:23:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58139
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58139 ']'
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58139
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58139
killing process with pid 58139
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58139'
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58139
00:05:05.636 09:23:53 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58139
00:05:08.932 ************************************
00:05:08.932 END TEST dpdk_mem_utility
00:05:08.932 ************************************
00:05:08.932
00:05:08.932 real 0m4.768s
00:05:08.932 user 0m4.565s
00:05:08.932 sys 0m0.801s
00:05:08.932 09:23:56 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:08.932 09:23:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:08.932 09:23:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:08.932 09:23:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:08.932 09:23:56 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:08.932 09:23:56 -- common/autotest_common.sh@10 -- # set +x
00:05:08.932 ************************************
00:05:08.932 START TEST event
00:05:08.932 ************************************
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
00:05:08.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1691 -- # lcov --version
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:08.932 09:23:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:08.932 09:23:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:08.932 09:23:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:08.932 09:23:56 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:08.932 09:23:56 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:08.932 09:23:56 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:08.932 09:23:56 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:08.932 09:23:56 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:08.932 09:23:56 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:08.932 09:23:56 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:08.932 09:23:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:08.932 09:23:56 event -- scripts/common.sh@344 -- # case "$op" in
00:05:08.932 09:23:56 event -- scripts/common.sh@345 -- # : 1
00:05:08.932 09:23:56 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:08.932 09:23:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:08.932 09:23:56 event -- scripts/common.sh@365 -- # decimal 1
00:05:08.932 09:23:56 event -- scripts/common.sh@353 -- # local d=1
00:05:08.932 09:23:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:08.932 09:23:56 event -- scripts/common.sh@355 -- # echo 1
00:05:08.932 09:23:56 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:08.932 09:23:56 event -- scripts/common.sh@366 -- # decimal 2
00:05:08.932 09:23:56 event -- scripts/common.sh@353 -- # local d=2
00:05:08.932 09:23:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:08.932 09:23:56 event -- scripts/common.sh@355 -- # echo 2
00:05:08.932 09:23:56 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:08.932 09:23:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:08.932 09:23:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:08.932 09:23:56 event -- scripts/common.sh@368 -- # return 0
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:08.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.932 --rc genhtml_branch_coverage=1
00:05:08.932 --rc genhtml_function_coverage=1
00:05:08.932 --rc genhtml_legend=1
00:05:08.932 --rc geninfo_all_blocks=1
00:05:08.932 --rc geninfo_unexecuted_blocks=1
00:05:08.932
00:05:08.932 '
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:08.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.932 --rc genhtml_branch_coverage=1
00:05:08.932 --rc genhtml_function_coverage=1
00:05:08.932 --rc genhtml_legend=1
00:05:08.932 --rc geninfo_all_blocks=1
00:05:08.932 --rc geninfo_unexecuted_blocks=1
00:05:08.932
00:05:08.932 '
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:08.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.932 --rc genhtml_branch_coverage=1
00:05:08.932 --rc genhtml_function_coverage=1
00:05:08.932 --rc genhtml_legend=1
00:05:08.932 --rc geninfo_all_blocks=1
00:05:08.932 --rc geninfo_unexecuted_blocks=1
00:05:08.932
00:05:08.932 '
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:08.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:08.932 --rc genhtml_branch_coverage=1
00:05:08.932 --rc genhtml_function_coverage=1
00:05:08.932 --rc genhtml_legend=1
00:05:08.932 --rc geninfo_all_blocks=1
00:05:08.932 --rc geninfo_unexecuted_blocks=1
00:05:08.932
00:05:08.932 '
00:05:08.932 09:23:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:08.932 09:23:56 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:08.932 09:23:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:05:08.932 09:23:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:08.932 09:23:56 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.932 ************************************
00:05:08.932 START TEST event_perf
00:05:08.932 ************************************
00:05:08.932 09:23:56 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:08.932 Running I/O for 1 seconds...[2024-11-15 09:23:57.002887] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization...
00:05:08.932 [2024-11-15 09:23:57.003267] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58258 ] 00:05:08.932 [2024-11-15 09:23:57.198021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.932 [2024-11-15 09:23:57.389477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.932 [2024-11-15 09:23:57.389738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.932 [2024-11-15 09:23:57.389698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.932 [2024-11-15 09:23:57.389569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.343 Running I/O for 1 seconds... 00:05:10.343 lcore 0: 85799 00:05:10.343 lcore 1: 85802 00:05:10.343 lcore 2: 85799 00:05:10.343 lcore 3: 85802 00:05:10.343 done. 
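The `-m 0xF` core mask passed to event_perf above selects four lcores (bits 0 through 3), which matches the four reactors started and the four per-lcore counters printed. As a side illustration (not part of the test itself), the set bits of such a mask can be expanded into a core list in shell:

```shell
#!/usr/bin/env bash
# Expand a hex core mask (e.g. 0xF, as used by event_perf -m 0xF above)
# into the list of selected core numbers.
mask=0xF
cores=()
for (( bit = 0; bit < 64; bit++ )); do
    # test whether this bit is set in the mask
    if (( (mask >> bit) & 1 )); then
        cores+=("$bit")
    fi
done
echo "selected cores: ${cores[*]}"   # selected cores: 0 1 2 3
```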
00:05:10.343 00:05:10.343 real 0m1.761s 00:05:10.343 user 0m4.444s 00:05:10.343 sys 0m0.178s 00:05:10.343 ************************************ 00:05:10.343 END TEST event_perf 00:05:10.343 ************************************ 00:05:10.343 09:23:58 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.343 09:23:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.343 09:23:58 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.343 09:23:58 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:10.343 09:23:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.343 09:23:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.343 ************************************ 00:05:10.343 START TEST event_reactor 00:05:10.343 ************************************ 00:05:10.343 09:23:58 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:10.602 [2024-11-15 09:23:58.814871] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:05:10.602 [2024-11-15 09:23:58.815020] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58303 ] 00:05:10.602 [2024-11-15 09:23:58.998207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.860 [2024-11-15 09:23:59.167540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.259 test_start 00:05:12.259 oneshot 00:05:12.259 tick 100 00:05:12.259 tick 100 00:05:12.259 tick 250 00:05:12.259 tick 100 00:05:12.259 tick 100 00:05:12.259 tick 100 00:05:12.259 tick 250 00:05:12.259 tick 500 00:05:12.259 tick 100 00:05:12.259 tick 100 00:05:12.259 tick 250 00:05:12.259 tick 100 00:05:12.259 tick 100 00:05:12.259 test_end 00:05:12.259 ************************************ 00:05:12.259 END TEST event_reactor 00:05:12.259 ************************************ 00:05:12.259 00:05:12.259 real 0m1.679s 00:05:12.259 user 0m1.449s 00:05:12.259 sys 0m0.120s 00:05:12.259 09:24:00 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.259 09:24:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:12.259 09:24:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:12.259 09:24:00 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:12.259 09:24:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.259 09:24:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.259 ************************************ 00:05:12.259 START TEST event_reactor_perf 00:05:12.259 ************************************ 00:05:12.259 09:24:00 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:12.259 [2024-11-15 
09:24:00.541832] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:12.259 [2024-11-15 09:24:00.541988] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58335 ] 00:05:12.518 [2024-11-15 09:24:00.726251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.518 [2024-11-15 09:24:00.885467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.898 test_start 00:05:13.898 test_end 00:05:13.898 Performance: 297065 events per second 00:05:13.898 00:05:13.898 real 0m1.685s 00:05:13.898 user 0m1.450s 00:05:13.898 sys 0m0.123s 00:05:13.898 ************************************ 00:05:13.898 END TEST event_reactor_perf 00:05:13.898 ************************************ 00:05:13.898 09:24:02 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:13.898 09:24:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.898 09:24:02 event -- event/event.sh@49 -- # uname -s 00:05:13.898 09:24:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.898 09:24:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.898 09:24:02 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:13.898 09:24:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.898 09:24:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.898 ************************************ 00:05:13.898 START TEST event_scheduler 00:05:13.898 ************************************ 00:05:13.898 09:24:02 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:13.898 * Looking for test storage... 
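The scheduler suite launched above is gated on `uname -s` at event/event.sh@49: it only runs when the host reports Linux, since the governor and power-management hooks it exercises are Linux-specific. The gating pattern, roughly (the helper name here is illustrative):

```shell
#!/usr/bin/env bash
# Linux-only gate in the spirit of the '[' Linux = Linux ']' check above.
should_run_scheduler() {
    # The cpufreq/governor interfaces the scheduler test pokes are Linux-only.
    [ "$1" = Linux ]
}

if should_run_scheduler "$(uname -s)"; then
    echo "would run event_scheduler"
else
    echo "skipping event_scheduler on $(uname -s)"
fi
```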
00:05:13.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:13.898 09:24:02 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.898 09:24:02 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.898 09:24:02 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.158 09:24:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.158 --rc genhtml_branch_coverage=1 00:05:14.158 --rc genhtml_function_coverage=1 00:05:14.158 --rc genhtml_legend=1 00:05:14.158 --rc geninfo_all_blocks=1 00:05:14.158 --rc geninfo_unexecuted_blocks=1 00:05:14.158 00:05:14.158 ' 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.158 --rc genhtml_branch_coverage=1 00:05:14.158 --rc genhtml_function_coverage=1 00:05:14.158 --rc 
genhtml_legend=1 00:05:14.158 --rc geninfo_all_blocks=1 00:05:14.158 --rc geninfo_unexecuted_blocks=1 00:05:14.158 00:05:14.158 ' 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.158 --rc genhtml_branch_coverage=1 00:05:14.158 --rc genhtml_function_coverage=1 00:05:14.158 --rc genhtml_legend=1 00:05:14.158 --rc geninfo_all_blocks=1 00:05:14.158 --rc geninfo_unexecuted_blocks=1 00:05:14.158 00:05:14.158 ' 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.158 --rc genhtml_branch_coverage=1 00:05:14.158 --rc genhtml_function_coverage=1 00:05:14.158 --rc genhtml_legend=1 00:05:14.158 --rc geninfo_all_blocks=1 00:05:14.158 --rc geninfo_unexecuted_blocks=1 00:05:14.158 00:05:14.158 ' 00:05:14.158 09:24:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:14.158 09:24:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58410 00:05:14.158 09:24:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:14.158 09:24:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.158 09:24:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58410 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58410 ']' 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:14.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.158 09:24:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.158 [2024-11-15 09:24:02.558907] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:14.158 [2024-11-15 09:24:02.559704] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58410 ] 00:05:14.418 [2024-11-15 09:24:02.749110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.678 [2024-11-15 09:24:02.926400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.678 [2024-11-15 09:24:02.926597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.678 [2024-11-15 09:24:02.926619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.678 [2024-11-15 09:24:02.926606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.246 09:24:03 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.246 09:24:03 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:15.246 09:24:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:15.246 09:24:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.246 09:24:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.246 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.246 POWER: Cannot set governor of lcore 0 to userspace 00:05:15.246 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.246 POWER: Cannot set governor of lcore 0 to performance 00:05:15.246 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.246 POWER: Cannot set governor of lcore 0 to userspace 00:05:15.246 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.246 POWER: Cannot set governor of lcore 0 to userspace 00:05:15.246 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:15.246 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:15.246 POWER: Unable to set Power Management Environment for lcore 0 00:05:15.246 [2024-11-15 09:24:03.512270] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:15.246 [2024-11-15 09:24:03.512330] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:15.247 [2024-11-15 09:24:03.512376] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:15.247 [2024-11-15 09:24:03.512444] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:15.247 [2024-11-15 09:24:03.512480] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:15.247 [2024-11-15 09:24:03.512525] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:15.247 09:24:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.247 09:24:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:15.247 09:24:03 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.247 09:24:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.504 [2024-11-15 09:24:03.942516] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
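The POWER errors above come from the dynamic scheduler trying to set each lcore's cpufreq governor; inside this VM there is no writable `scaling_governor` node (and no guest power-agent channel), so the DPDK governor fails to initialize and the scheduler proceeds with its default thresholds (load limit 20, core limit 80, core busy 95). A quick probe for whether a host would hit the same failure, using the standard kernel cpufreq sysfs path (helper name is illustrative):

```shell
#!/usr/bin/env bash
# Probe for a writable cpufreq governor knob; VMs typically lack one,
# which is why the log shows "Cannot set governor of lcore 0".
has_cpufreq_governor() {
    [ -w "$1/cpufreq/scaling_governor" ]
}

if has_cpufreq_governor /sys/devices/system/cpu/cpu0; then
    echo "governor is settable on cpu0: $(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)"
else
    echo "no writable governor on cpu0; DPDK power management would fall back"
fi
```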
00:05:15.504 09:24:03 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.504 09:24:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:15.504 09:24:03 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.504 09:24:03 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.504 09:24:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.504 ************************************ 00:05:15.504 START TEST scheduler_create_thread 00:05:15.504 ************************************ 00:05:15.504 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:15.504 09:24:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:15.504 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.504 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 2 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 3 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 4 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 5 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 6 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:15.763 7 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 8 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 9 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.763 10 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.763 09:24:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.142 09:24:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.142 09:24:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:17.142 09:24:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:17.142 09:24:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.142 09:24:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.080 09:24:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.080 09:24:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:18.080 09:24:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.080 09:24:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.649 09:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.649 09:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:18.649 09:24:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:18.649 09:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.649 09:24:07 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.586 ************************************ 00:05:19.586 END TEST scheduler_create_thread 00:05:19.586 ************************************ 00:05:19.586 09:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.586 00:05:19.586 real 0m3.886s 00:05:19.586 user 0m0.031s 00:05:19.586 sys 0m0.010s 00:05:19.586 09:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.586 09:24:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.586 09:24:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:19.586 09:24:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58410 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58410 ']' 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58410 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58410 00:05:19.586 killing process with pid 58410 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58410' 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58410 00:05:19.586 09:24:07 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58410 00:05:19.845 [2024-11-15 09:24:08.226686] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:21.225 00:05:21.225 real 0m7.258s 00:05:21.225 user 0m14.896s 00:05:21.225 sys 0m0.648s 00:05:21.225 09:24:09 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.225 09:24:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.225 ************************************ 00:05:21.225 END TEST event_scheduler 00:05:21.225 ************************************ 00:05:21.225 09:24:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:21.225 09:24:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:21.225 09:24:09 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.225 09:24:09 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.225 09:24:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.225 ************************************ 00:05:21.225 START TEST app_repeat 00:05:21.225 ************************************ 00:05:21.225 09:24:09 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58538 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.225 09:24:09 event.app_repeat -- 
event/event.sh@21 -- # echo 'Process app_repeat pid: 58538' 00:05:21.225 Process app_repeat pid: 58538 00:05:21.225 spdk_app_start Round 0 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:05:21.225 09:24:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:21.225 09:24:09 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58538 ']' 00:05:21.225 09:24:09 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.225 09:24:09 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:21.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.225 09:24:09 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.225 09:24:09 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:21.225 09:24:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.225 [2024-11-15 09:24:09.632120] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
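`waitforlisten` above polls until the freshly launched app_repeat process is up and listening on `/var/tmp/spdk-nbd.sock`, giving up after `max_retries` (100) attempts. A simplified stand-in for that wait loop (the real helper lives in autotest_common.sh and additionally checks the pid and issues an RPC):

```shell
#!/usr/bin/env bash
# Poll for a UNIX-domain socket to appear, up to max_retries attempts,
# in the spirit of waitforlisten in the log above.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0    # socket exists: listener is up
        sleep 0.1
    done
    return 1                          # gave up after max_retries polls
}
```

Usage would look like `wait_for_socket /var/tmp/spdk-nbd.sock 100 || exit 1` before issuing the first rpc.py call against the socket.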
00:05:21.225 [2024-11-15 09:24:09.632371] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58538 ] 00:05:21.485 [2024-11-15 09:24:09.799730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.745 [2024-11-15 09:24:09.972785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.745 [2024-11-15 09:24:09.972823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.313 09:24:10 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.313 09:24:10 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:22.313 09:24:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.571 Malloc0 00:05:22.571 09:24:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.138 Malloc1 00:05:23.138 09:24:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.138 09:24:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.138 09:24:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.138 09:24:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.138 09:24:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.138 09:24:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.138 09:24:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.139 09:24:11 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.139 09:24:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.397 /dev/nbd0 00:05:23.397 09:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.397 09:24:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.397 1+0 records in 00:05:23.397 1+0 
records out 00:05:23.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521647 s, 7.9 MB/s 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.397 09:24:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:23.397 09:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.397 09:24:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.397 09:24:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.965 /dev/nbd1 00:05:23.965 09:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.965 09:24:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.965 1+0 records in 00:05:23.965 1+0 records out 00:05:23.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419012 s, 9.8 MB/s 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:23.965 09:24:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:23.965 09:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.965 09:24:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.965 09:24:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.965 09:24:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.965 09:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.224 { 00:05:24.224 "nbd_device": "/dev/nbd0", 00:05:24.224 "bdev_name": "Malloc0" 00:05:24.224 }, 00:05:24.224 { 00:05:24.224 "nbd_device": "/dev/nbd1", 00:05:24.224 "bdev_name": "Malloc1" 00:05:24.224 } 00:05:24.224 ]' 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.224 { 00:05:24.224 "nbd_device": "/dev/nbd0", 00:05:24.224 "bdev_name": "Malloc0" 00:05:24.224 }, 00:05:24.224 { 00:05:24.224 "nbd_device": "/dev/nbd1", 00:05:24.224 "bdev_name": "Malloc1" 00:05:24.224 } 00:05:24.224 ]' 
00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.224 /dev/nbd1' 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.224 /dev/nbd1' 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.224 256+0 records in 00:05:24.224 256+0 records out 00:05:24.224 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601322 s, 174 MB/s 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.224 09:24:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.484 256+0 records in 00:05:24.484 256+0 records out 00:05:24.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252497 s, 41.5 MB/s 00:05:24.484 09:24:12 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.484 256+0 records in 00:05:24.484 256+0 records out 00:05:24.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369828 s, 28.4 MB/s 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.484 09:24:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.743 09:24:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.001 09:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.002 09:24:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.260 09:24:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.260 09:24:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.827 09:24:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.816 [2024-11-15 09:24:15.768470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.816 [2024-11-15 09:24:15.933897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.816 [2024-11-15 09:24:15.933917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.816 
[2024-11-15 09:24:16.204396] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.816 [2024-11-15 09:24:16.204516] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.193 spdk_app_start Round 1 00:05:29.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.193 09:24:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.193 09:24:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:29.193 09:24:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58538 ']' 00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.193 09:24:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:29.193 09:24:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.451 Malloc0 00:05:29.451 09:24:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.709 Malloc1 00:05:29.709 09:24:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.709 09:24:18 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.709 09:24:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.968 /dev/nbd0 00:05:29.968 09:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.968 09:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.968 1+0 records in 00:05:29.968 1+0 records out 00:05:29.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303799 s, 13.5 MB/s 00:05:29.968 09:24:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.226 09:24:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:30.226 09:24:18 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.226 
09:24:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.226 09:24:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:30.226 09:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.226 09:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.226 09:24:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.226 /dev/nbd1 00:05:30.226 09:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.226 09:24:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.226 09:24:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.484 1+0 records in 00:05:30.484 1+0 records out 00:05:30.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296093 s, 13.8 MB/s 00:05:30.484 09:24:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.485 09:24:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:30.485 09:24:18 event.app_repeat 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:30.485 09:24:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:30.485 09:24:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:30.485 { 00:05:30.485 "nbd_device": "/dev/nbd0", 00:05:30.485 "bdev_name": "Malloc0" 00:05:30.485 }, 00:05:30.485 { 00:05:30.485 "nbd_device": "/dev/nbd1", 00:05:30.485 "bdev_name": "Malloc1" 00:05:30.485 } 00:05:30.485 ]' 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:30.485 { 00:05:30.485 "nbd_device": "/dev/nbd0", 00:05:30.485 "bdev_name": "Malloc0" 00:05:30.485 }, 00:05:30.485 { 00:05:30.485 "nbd_device": "/dev/nbd1", 00:05:30.485 "bdev_name": "Malloc1" 00:05:30.485 } 00:05:30.485 ]' 00:05:30.485 09:24:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.744 /dev/nbd1' 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.744 /dev/nbd1' 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.744 
09:24:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.744 09:24:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.744 256+0 records in 00:05:30.744 256+0 records out 00:05:30.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00819515 s, 128 MB/s 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.744 256+0 records in 00:05:30.744 256+0 records out 00:05:30.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259691 s, 40.4 MB/s 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.744 256+0 records in 00:05:30.744 256+0 records out 00:05:30.744 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278256 s, 37.7 MB/s 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.744 09:24:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.003 09:24:19 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.003 09:24:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.262 09:24:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.521 09:24:19 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.521 09:24:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.521 09:24:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.087 09:24:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.480 [2024-11-15 09:24:21.587995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.480 [2024-11-15 09:24:21.734222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.480 [2024-11-15 09:24:21.734243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.739 [2024-11-15 09:24:21.975471] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.739 [2024-11-15 09:24:21.975604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:35.113 09:24:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.113 spdk_app_start Round 2 00:05:35.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.113 09:24:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.113 09:24:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:05:35.113 09:24:23 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58538 ']' 00:05:35.113 09:24:23 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.113 09:24:23 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.114 09:24:23 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.114 09:24:23 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.114 09:24:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.114 09:24:23 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.114 09:24:23 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:35.114 09:24:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.371 Malloc0 00:05:35.371 09:24:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:35.637 Malloc1 00:05:35.907 09:24:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.907 
09:24:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:35.907 /dev/nbd0 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.907 09:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:35.907 09:24:24 
event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.907 1+0 records in 00:05:35.907 1+0 records out 00:05:35.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373142 s, 11.0 MB/s 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:35.907 09:24:24 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.165 09:24:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:36.165 09:24:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:36.165 09:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.165 09:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.166 09:24:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.166 /dev/nbd1 00:05:36.166 09:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.166 09:24:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:36.166 09:24:24 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:36.166 09:24:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.425 1+0 records in 00:05:36.425 1+0 records out 00:05:36.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391093 s, 10.5 MB/s 00:05:36.425 09:24:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.425 09:24:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:36.425 09:24:24 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:36.425 09:24:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:36.425 09:24:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.425 { 00:05:36.425 "nbd_device": "/dev/nbd0", 00:05:36.425 "bdev_name": "Malloc0" 00:05:36.425 }, 00:05:36.425 { 00:05:36.425 "nbd_device": "/dev/nbd1", 00:05:36.425 "bdev_name": 
"Malloc1" 00:05:36.425 } 00:05:36.425 ]' 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.425 { 00:05:36.425 "nbd_device": "/dev/nbd0", 00:05:36.425 "bdev_name": "Malloc0" 00:05:36.425 }, 00:05:36.425 { 00:05:36.425 "nbd_device": "/dev/nbd1", 00:05:36.425 "bdev_name": "Malloc1" 00:05:36.425 } 00:05:36.425 ]' 00:05:36.425 09:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.683 /dev/nbd1' 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.683 /dev/nbd1' 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.683 256+0 records in 00:05:36.683 256+0 records out 00:05:36.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440107 s, 238 MB/s 
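The trace above shows `nbd_dd_data_verify` in its write phase: it fills a temp file with random data, copies it onto each NBD device with `dd`, and later byte-compares the devices against the file with `cmp`. The following is a minimal stand-alone sketch of that write-then-verify pattern; it substitutes temp files for the real `/dev/nbd*` devices so it runs anywhere, and omits the `oflag=direct` the real test uses to bypass the page cache.

```shell
#!/bin/sh
# Sketch of the dd-write / cmp-verify pattern traced above
# (nbd_dd_data_verify). All paths are placeholders: regular temp
# files stand in for /dev/nbd0 and /dev/nbd1 so no NBD setup is needed.
set -eu

tmp_file=$(mktemp)   # stand-in for .../test/event/nbdrandtest
dev_a=$(mktemp)      # stand-in for /dev/nbd0
dev_b=$(mktemp)      # stand-in for /dev/nbd1

# Write phase: one random 1 MiB reference buffer, copied to each "device".
# The real test adds oflag=direct on the device writes.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev_a" "$dev_b"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1M of each "device" against the
# reference file; cmp exits non-zero on the first mismatch.
for dev in "$dev_a" "$dev_b"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

echo PASS
rm -f "$tmp_file" "$dev_a" "$dev_b"
```

Writing through the reference file rather than comparing the two devices to each other is what lets the test detect a device that silently drops or corrupts writes, since each device is checked against known-good data.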
00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.683 256+0 records in 00:05:36.683 256+0 records out 00:05:36.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285644 s, 36.7 MB/s 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.683 09:24:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.683 256+0 records in 00:05:36.683 256+0 records out 00:05:36.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282929 s, 37.1 MB/s 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.683 09:24:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.941 09:24:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.199 09:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.457 09:24:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.457 09:24:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.023 09:24:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.399 [2024-11-15 09:24:27.466563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.399 [2024-11-15 09:24:27.583112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.399 [2024-11-15 09:24:27.583113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.399 [2024-11-15 09:24:27.792474] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.399 [2024-11-15 09:24:27.792584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.301 09:24:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58538 /var/tmp/spdk-nbd.sock 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58538 ']' 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:41.301 09:24:29 event.app_repeat -- event/event.sh@39 -- # killprocess 58538 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58538 ']' 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58538 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58538 00:05:41.301 killing process with pid 58538 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58538' 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58538 00:05:41.301 09:24:29 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58538 00:05:42.238 spdk_app_start is called in Round 0. 00:05:42.238 Shutdown signal received, stop current app iteration 00:05:42.238 Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 reinitialization... 00:05:42.238 spdk_app_start is called in Round 1. 00:05:42.238 Shutdown signal received, stop current app iteration 00:05:42.238 Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 reinitialization... 00:05:42.238 spdk_app_start is called in Round 2. 
00:05:42.238 Shutdown signal received, stop current app iteration 00:05:42.238 Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 reinitialization... 00:05:42.238 spdk_app_start is called in Round 3. 00:05:42.238 Shutdown signal received, stop current app iteration 00:05:42.238 09:24:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.238 09:24:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.238 00:05:42.238 real 0m21.084s 00:05:42.238 user 0m45.645s 00:05:42.238 sys 0m3.186s 00:05:42.238 09:24:30 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.238 09:24:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.238 ************************************ 00:05:42.238 END TEST app_repeat 00:05:42.238 ************************************ 00:05:42.238 09:24:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.238 09:24:30 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.238 09:24:30 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.238 09:24:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.238 09:24:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.497 ************************************ 00:05:42.497 START TEST cpu_locks 00:05:42.497 ************************************ 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:42.497 * Looking for test storage... 
00:05:42.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.497 09:24:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:42.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.497 --rc genhtml_branch_coverage=1 00:05:42.497 --rc genhtml_function_coverage=1 00:05:42.497 --rc genhtml_legend=1 00:05:42.497 --rc geninfo_all_blocks=1 00:05:42.497 --rc geninfo_unexecuted_blocks=1 00:05:42.497 00:05:42.497 ' 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:42.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.497 --rc genhtml_branch_coverage=1 00:05:42.497 --rc genhtml_function_coverage=1 00:05:42.497 --rc genhtml_legend=1 00:05:42.497 --rc geninfo_all_blocks=1 00:05:42.497 --rc geninfo_unexecuted_blocks=1 
00:05:42.497 00:05:42.497 ' 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:42.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.497 --rc genhtml_branch_coverage=1 00:05:42.497 --rc genhtml_function_coverage=1 00:05:42.497 --rc genhtml_legend=1 00:05:42.497 --rc geninfo_all_blocks=1 00:05:42.497 --rc geninfo_unexecuted_blocks=1 00:05:42.497 00:05:42.497 ' 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:42.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.497 --rc genhtml_branch_coverage=1 00:05:42.497 --rc genhtml_function_coverage=1 00:05:42.497 --rc genhtml_legend=1 00:05:42.497 --rc geninfo_all_blocks=1 00:05:42.497 --rc geninfo_unexecuted_blocks=1 00:05:42.497 00:05:42.497 ' 00:05:42.497 09:24:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.497 09:24:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.497 09:24:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.497 09:24:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.497 09:24:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.497 ************************************ 00:05:42.497 START TEST default_locks 00:05:42.497 ************************************ 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59004 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.497 
09:24:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59004 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59004 ']' 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.497 09:24:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.756 [2024-11-15 09:24:31.036834] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:05:42.756 [2024-11-15 09:24:31.037757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59004 ] 00:05:42.756 [2024-11-15 09:24:31.217568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.074 [2024-11-15 09:24:31.342330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.023 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.023 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:44.023 09:24:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59004 00:05:44.023 09:24:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.023 09:24:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59004 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59004 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59004 ']' 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59004 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59004 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.283 killing process with pid 59004 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59004' 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59004 00:05:44.283 09:24:32 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59004 00:05:46.831 09:24:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59004 00:05:46.831 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:46.831 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59004 00:05:46.831 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59004 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59004 ']' 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.091 ERROR: process (pid: 59004) is no longer running 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.091 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59004) - No such process 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.091 00:05:47.091 real 0m4.394s 00:05:47.091 user 0m4.335s 00:05:47.091 sys 0m0.677s 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.091 09:24:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.091 ************************************ 00:05:47.091 END TEST default_locks 00:05:47.091 ************************************ 00:05:47.091 09:24:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.091 09:24:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.091 09:24:35 event.cpu_locks -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.091 09:24:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.091 ************************************ 00:05:47.091 START TEST default_locks_via_rpc 00:05:47.091 ************************************ 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59079 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59079 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59079 ']' 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.091 09:24:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.091 [2024-11-15 09:24:35.480244] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
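The `waitforlisten 59079` call traced above (with its `local max_retries=100` and "Waiting for process to start up and listen on UNIX domain socket..." message) is a bounded polling loop: retry until the target's RPC socket is usable or the retry budget is exhausted. Below is a minimal sketch of that retry shape; the readiness check here is plain file existence as a stand-in for the real helper's RPC-socket probe, and the socket path, delay, and retry count are illustrative values, not the test's exact ones.

```shell
#!/bin/sh
# Sketch of the bounded wait-for-readiness loop behind waitforlisten.
# A background job creating a file simulates a daemon (spdk_tgt in the
# real test) eventually bringing up its UNIX domain socket.
set -eu

sock=$(mktemp -u)          # stand-in for /var/tmp/spdk.sock
max_retries=100

( sleep 0.2; : > "$sock" ) &   # "daemon" becomes ready after a delay

echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
i=0
until [ -e "$sock" ]; do       # real helper probes the RPC socket instead
    i=$((i + 1))
    if [ "$i" -gt "$max_retries" ]; then
        echo "timed out waiting for $sock" >&2
        exit 1
    fi
    sleep 0.1
done
echo "ready: $sock"
rm -f "$sock"
```

Bounding the loop matters in CI: a daemon that crashes before creating its socket fails the test after ~10 seconds here instead of hanging the whole pipeline, which is why the log's failure path ("process ... is no longer running") exists at all.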
00:05:47.091 [2024-11-15 09:24:35.480391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59079 ] 00:05:47.351 [2024-11-15 09:24:35.656147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.351 [2024-11-15 09:24:35.782668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.288 09:24:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59079 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59079 00:05:48.288 09:24:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59079 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59079 ']' 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59079 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59079 00:05:48.857 killing process with pid 59079 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59079' 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59079 00:05:48.857 09:24:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59079 00:05:51.412 00:05:51.412 real 0m4.294s 00:05:51.412 user 0m4.282s 00:05:51.412 sys 0m0.625s 00:05:51.412 09:24:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.412 
************************************ 00:05:51.412 END TEST default_locks_via_rpc 00:05:51.412 ************************************ 00:05:51.412 09:24:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.412 09:24:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.412 09:24:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.412 09:24:39 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.412 09:24:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.412 ************************************ 00:05:51.412 START TEST non_locking_app_on_locked_coremask 00:05:51.412 ************************************ 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59166 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59166 /var/tmp/spdk.sock 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59166 ']' 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:51.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:51.412 09:24:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.412 [2024-11-15 09:24:39.847902] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:51.412 [2024-11-15 09:24:39.848180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59166 ] 00:05:51.669 [2024-11-15 09:24:40.029349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.928 [2024-11-15 09:24:40.158411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59183 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59183 /var/tmp/spdk2.sock 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59183 ']' 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.865 09:24:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:52.865 09:24:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.865 [2024-11-15 09:24:41.210006] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:05:52.865 [2024-11-15 09:24:41.210272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:05:53.124 [2024-11-15 09:24:41.392135] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.124 [2024-11-15 09:24:41.392217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.383 [2024-11-15 09:24:41.644533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.918 09:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:55.918 09:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:55.918 09:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59166 00:05:55.918 09:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59166 00:05:55.918 09:24:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59166 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59166 ']' 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59166 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59166 00:05:55.918 killing process with pid 59166 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 59166' 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59166 00:05:55.918 09:24:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59166 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59183 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59183 ']' 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59183 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59183 00:06:01.219 killing process with pid 59183 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59183' 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59183 00:06:01.219 09:24:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59183 00:06:03.753 ************************************ 00:06:03.753 END TEST non_locking_app_on_locked_coremask 00:06:03.753 ************************************ 00:06:03.753 00:06:03.753 real 0m12.175s 
00:06:03.753 user 0m12.452s 00:06:03.753 sys 0m1.231s 00:06:03.753 09:24:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.753 09:24:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.753 09:24:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:03.753 09:24:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.753 09:24:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.753 09:24:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.753 ************************************ 00:06:03.753 START TEST locking_app_on_unlocked_coremask 00:06:03.753 ************************************ 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59337 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59337 /var/tmp/spdk.sock 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59337 ']' 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.753 09:24:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.753 [2024-11-15 09:24:52.100783] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:03.753 [2024-11-15 09:24:52.101086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59337 ] 00:06:04.012 [2024-11-15 09:24:52.286128] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.012 [2024-11-15 09:24:52.286283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.012 [2024-11-15 09:24:52.408681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59353 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59353 /var/tmp/spdk2.sock 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59353 ']' 00:06:04.947 09:24:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.947 09:24:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.206 [2024-11-15 09:24:53.448862] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:05.207 [2024-11-15 09:24:53.449100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:06:05.207 [2024-11-15 09:24:53.627151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.771 [2024-11-15 09:24:53.942709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.673 09:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.673 09:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:07.673 09:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59353 00:06:07.673 09:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59353 00:06:07.673 09:24:56 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59337 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59337 ']' 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59337 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59337 00:06:08.621 killing process with pid 59337 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59337' 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59337 00:06:08.621 09:24:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59337 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59353 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59353 ']' 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59353 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:15.179 
09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59353 00:06:15.179 killing process with pid 59353 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59353' 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59353 00:06:15.179 09:25:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59353 00:06:17.082 00:06:17.082 real 0m13.238s 00:06:17.082 user 0m13.365s 00:06:17.082 sys 0m1.689s 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.082 ************************************ 00:06:17.082 END TEST locking_app_on_unlocked_coremask 00:06:17.082 ************************************ 00:06:17.082 09:25:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:17.082 09:25:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.082 09:25:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.082 09:25:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.082 ************************************ 00:06:17.082 START TEST locking_app_on_locked_coremask 00:06:17.082 
************************************ 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59517 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59517 /var/tmp/spdk.sock 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59517 ']' 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:17.082 09:25:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.082 [2024-11-15 09:25:05.396239] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:17.082 [2024-11-15 09:25:05.396448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59517 ] 00:06:17.340 [2024-11-15 09:25:05.575674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.340 [2024-11-15 09:25:05.724727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.711 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59539 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59539 /var/tmp/spdk2.sock 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59539 /var/tmp/spdk2.sock 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59539 /var/tmp/spdk2.sock 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59539 ']' 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.712 09:25:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.712 [2024-11-15 09:25:06.919823] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:18.712 [2024-11-15 09:25:06.920234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59539 ] 00:06:18.712 [2024-11-15 09:25:07.120536] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59517 has claimed it. 00:06:18.712 [2024-11-15 09:25:07.120656] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:19.277 ERROR: process (pid: 59539) is no longer running 00:06:19.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59539) - No such process 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59517 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59517 00:06:19.277 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.535 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59517 00:06:19.535 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59517 ']' 00:06:19.535 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59517 00:06:19.535 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:19.535 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.536 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59517 00:06:19.536 
09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.536 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.536 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59517' 00:06:19.536 killing process with pid 59517 00:06:19.536 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59517 00:06:19.536 09:25:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59517 00:06:22.814 00:06:22.814 real 0m5.603s 00:06:22.814 user 0m5.670s 00:06:22.814 sys 0m0.951s 00:06:22.814 09:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.814 09:25:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.814 ************************************ 00:06:22.814 END TEST locking_app_on_locked_coremask 00:06:22.814 ************************************ 00:06:22.814 09:25:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:22.814 09:25:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.814 09:25:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.814 09:25:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.814 ************************************ 00:06:22.814 START TEST locking_overlapped_coremask 00:06:22.814 ************************************ 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59614 00:06:22.814 09:25:10 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59614 /var/tmp/spdk.sock 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59614 ']' 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.814 09:25:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.814 [2024-11-15 09:25:11.055417] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:22.814 [2024-11-15 09:25:11.055540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59614 ] 00:06:22.814 [2024-11-15 09:25:11.235014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.071 [2024-11-15 09:25:11.395303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.071 [2024-11-15 09:25:11.395474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.071 [2024-11-15 09:25:11.395515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59633 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59633 /var/tmp/spdk2.sock 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59633 /var/tmp/spdk2.sock 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:24.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59633 /var/tmp/spdk2.sock 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59633 ']' 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.454 09:25:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.454 [2024-11-15 09:25:12.674238] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:24.454 [2024-11-15 09:25:12.674540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:06:24.454 [2024-11-15 09:25:12.870414] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59614 has claimed it. 00:06:24.454 [2024-11-15 09:25:12.870527] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:25.018 ERROR: process (pid: 59633) is no longer running 00:06:25.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59633) - No such process 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59614 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59614 ']' 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59614 00:06:25.018 09:25:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:25.018 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:25.019 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59614 00:06:25.019 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:25.019 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:25.019 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59614' 00:06:25.019 killing process with pid 59614 00:06:25.019 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59614 00:06:25.019 09:25:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59614 00:06:28.304 00:06:28.304 real 0m5.448s 00:06:28.304 user 0m14.743s 00:06:28.304 sys 0m0.828s 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.304 ************************************ 00:06:28.304 END TEST locking_overlapped_coremask 00:06:28.304 ************************************ 00:06:28.304 09:25:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.304 09:25:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.304 09:25:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.304 09:25:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.304 ************************************ 00:06:28.304 START TEST 
locking_overlapped_coremask_via_rpc 00:06:28.304 ************************************ 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59707 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59707 /var/tmp/spdk.sock 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59707 ']' 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:28.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:28.304 09:25:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.304 [2024-11-15 09:25:16.585404] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:28.304 [2024-11-15 09:25:16.585615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59707 ] 00:06:28.304 [2024-11-15 09:25:16.766157] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.304 [2024-11-15 09:25:16.766224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.563 [2024-11-15 09:25:16.907275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.563 [2024-11-15 09:25:16.907450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.563 [2024-11-15 09:25:16.907493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59731 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59731 /var/tmp/spdk2.sock 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59731 ']' 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.494 09:25:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.494 09:25:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.751 [2024-11-15 09:25:18.078624] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:29.751 [2024-11-15 09:25:18.078860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59731 ] 00:06:30.009 [2024-11-15 09:25:18.268784] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.009 [2024-11-15 09:25:18.268876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.267 [2024-11-15 09:25:18.590957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.267 [2024-11-15 09:25:18.593975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.267 [2024-11-15 09:25:18.594027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.794 09:25:20 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.794 09:25:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.794 [2024-11-15 09:25:21.006157] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59707 has claimed it. 00:06:32.794 request: 00:06:32.794 { 00:06:32.794 "method": "framework_enable_cpumask_locks", 00:06:32.794 "req_id": 1 00:06:32.794 } 00:06:32.794 Got JSON-RPC error response 00:06:32.794 response: 00:06:32.794 { 00:06:32.794 "code": -32603, 00:06:32.794 "message": "Failed to claim CPU core: 2" 00:06:32.794 } 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59707 /var/tmp/spdk.sock 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 59707 ']' 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.794 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.795 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.795 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59731 /var/tmp/spdk2.sock 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59731 ']' 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:33.056 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.317 00:06:33.317 real 0m5.140s 00:06:33.317 user 0m1.551s 00:06:33.317 sys 0m0.246s 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.317 09:25:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.317 ************************************ 00:06:33.317 END TEST locking_overlapped_coremask_via_rpc 00:06:33.317 ************************************ 00:06:33.317 09:25:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:33.317 09:25:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59707 ]] 00:06:33.317 09:25:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59707 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59707 ']' 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59707 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59707 00:06:33.317 killing process with pid 59707 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59707' 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59707 00:06:33.317 09:25:21 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59707 00:06:36.602 09:25:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59731 ]] 00:06:36.602 09:25:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59731 00:06:36.602 09:25:24 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59731 ']' 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59731 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59731 00:06:36.603 killing process with pid 59731 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59731' 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59731 00:06:36.603 09:25:24 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59731 00:06:39.921 09:25:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.921 09:25:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.921 09:25:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59707 ]] 00:06:39.921 09:25:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59707 00:06:39.921 09:25:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59707 ']' 00:06:39.921 Process with pid 59707 is not found 00:06:39.921 09:25:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59707 00:06:39.921 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59707) - No such process 00:06:39.921 09:25:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59707 is not found' 00:06:39.921 09:25:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59731 ]] 00:06:39.921 09:25:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59731 00:06:39.921 09:25:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59731 ']' 00:06:39.921 09:25:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59731 00:06:39.921 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59731) - No such process 00:06:39.922 Process with pid 59731 is not found 00:06:39.922 09:25:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59731 is not found' 00:06:39.922 09:25:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.922 00:06:39.922 real 0m56.961s 00:06:39.922 user 1m39.057s 00:06:39.922 sys 0m7.747s 00:06:39.922 ************************************ 00:06:39.922 END TEST cpu_locks 00:06:39.922 ************************************ 00:06:39.922 09:25:27 event.cpu_locks -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:06:39.922 09:25:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.922 ************************************ 00:06:39.922 END TEST event 00:06:39.922 ************************************ 00:06:39.922 00:06:39.922 real 1m31.003s 00:06:39.922 user 2m47.183s 00:06:39.922 sys 0m12.349s 00:06:39.922 09:25:27 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:39.922 09:25:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.922 09:25:27 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.922 09:25:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:39.922 09:25:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.922 09:25:27 -- common/autotest_common.sh@10 -- # set +x 00:06:39.922 ************************************ 00:06:39.922 START TEST thread 00:06:39.922 ************************************ 00:06:39.922 09:25:27 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:39.922 * Looking for test storage... 
00:06:39.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:39.922 09:25:27 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:39.922 09:25:27 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:39.922 09:25:27 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:39.922 09:25:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.922 09:25:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.922 09:25:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.922 09:25:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.922 09:25:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.922 09:25:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.922 09:25:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.922 09:25:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.922 09:25:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.922 09:25:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.922 09:25:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.922 09:25:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:39.922 09:25:28 thread -- scripts/common.sh@345 -- # : 1 00:06:39.922 09:25:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.922 09:25:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.922 09:25:28 thread -- scripts/common.sh@365 -- # decimal 1 00:06:39.922 09:25:28 thread -- scripts/common.sh@353 -- # local d=1 00:06:39.922 09:25:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.922 09:25:28 thread -- scripts/common.sh@355 -- # echo 1 00:06:39.922 09:25:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.922 09:25:28 thread -- scripts/common.sh@366 -- # decimal 2 00:06:39.922 09:25:28 thread -- scripts/common.sh@353 -- # local d=2 00:06:39.922 09:25:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.922 09:25:28 thread -- scripts/common.sh@355 -- # echo 2 00:06:39.922 09:25:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.922 09:25:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.922 09:25:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.922 09:25:28 thread -- scripts/common.sh@368 -- # return 0 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.922 --rc genhtml_branch_coverage=1 00:06:39.922 --rc genhtml_function_coverage=1 00:06:39.922 --rc genhtml_legend=1 00:06:39.922 --rc geninfo_all_blocks=1 00:06:39.922 --rc geninfo_unexecuted_blocks=1 00:06:39.922 00:06:39.922 ' 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.922 --rc genhtml_branch_coverage=1 00:06:39.922 --rc genhtml_function_coverage=1 00:06:39.922 --rc genhtml_legend=1 00:06:39.922 --rc geninfo_all_blocks=1 00:06:39.922 --rc geninfo_unexecuted_blocks=1 00:06:39.922 00:06:39.922 ' 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:39.922 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.922 --rc genhtml_branch_coverage=1 00:06:39.922 --rc genhtml_function_coverage=1 00:06:39.922 --rc genhtml_legend=1 00:06:39.922 --rc geninfo_all_blocks=1 00:06:39.922 --rc geninfo_unexecuted_blocks=1 00:06:39.922 00:06:39.922 ' 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.922 --rc genhtml_branch_coverage=1 00:06:39.922 --rc genhtml_function_coverage=1 00:06:39.922 --rc genhtml_legend=1 00:06:39.922 --rc geninfo_all_blocks=1 00:06:39.922 --rc geninfo_unexecuted_blocks=1 00:06:39.922 00:06:39.922 ' 00:06:39.922 09:25:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:39.922 09:25:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.922 ************************************ 00:06:39.922 START TEST thread_poller_perf 00:06:39.922 ************************************ 00:06:39.922 09:25:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:39.922 [2024-11-15 09:25:28.098821] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:39.922 [2024-11-15 09:25:28.098958] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59950 ] 00:06:39.922 [2024-11-15 09:25:28.277201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.180 [2024-11-15 09:25:28.411513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.180 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.556 [2024-11-15T09:25:30.019Z] ====================================== 00:06:41.556 [2024-11-15T09:25:30.019Z] busy:2301962864 (cyc) 00:06:41.556 [2024-11-15T09:25:30.019Z] total_run_count: 336000 00:06:41.556 [2024-11-15T09:25:30.019Z] tsc_hz: 2290000000 (cyc) 00:06:41.556 [2024-11-15T09:25:30.019Z] ====================================== 00:06:41.556 [2024-11-15T09:25:30.019Z] poller_cost: 6851 (cyc), 2991 (nsec) 00:06:41.556 00:06:41.556 real 0m1.637s 00:06:41.556 user 0m1.429s 00:06:41.556 sys 0m0.099s 00:06:41.556 09:25:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.556 09:25:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.556 ************************************ 00:06:41.556 END TEST thread_poller_perf 00:06:41.556 ************************************ 00:06:41.556 09:25:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.556 09:25:29 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:41.556 09:25:29 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.556 09:25:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.556 ************************************ 00:06:41.556 START TEST thread_poller_perf 00:06:41.556 
************************************ 00:06:41.556 09:25:29 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.556 [2024-11-15 09:25:29.789427] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:41.556 [2024-11-15 09:25:29.789634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59981 ] 00:06:41.556 [2024-11-15 09:25:29.964954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.814 [2024-11-15 09:25:30.098636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.814 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:43.189 [2024-11-15T09:25:31.652Z] ====================================== 00:06:43.189 [2024-11-15T09:25:31.652Z] busy:2297634442 (cyc) 00:06:43.189 [2024-11-15T09:25:31.652Z] total_run_count: 4354000 00:06:43.189 [2024-11-15T09:25:31.652Z] tsc_hz: 2290000000 (cyc) 00:06:43.189 [2024-11-15T09:25:31.652Z] ====================================== 00:06:43.189 [2024-11-15T09:25:31.652Z] poller_cost: 527 (cyc), 230 (nsec) 00:06:43.189 00:06:43.189 real 0m1.643s 00:06:43.189 user 0m1.417s 00:06:43.189 sys 0m0.117s 00:06:43.189 ************************************ 00:06:43.189 END TEST thread_poller_perf 00:06:43.189 ************************************ 00:06:43.189 09:25:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.189 09:25:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.189 09:25:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:43.189 ************************************ 00:06:43.189 END TEST thread 00:06:43.189 ************************************ 00:06:43.189 
00:06:43.189 real 0m3.646s 00:06:43.189 user 0m3.047s 00:06:43.189 sys 0m0.394s 00:06:43.189 09:25:31 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.189 09:25:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.189 09:25:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:43.189 09:25:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:43.189 09:25:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.189 09:25:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.189 09:25:31 -- common/autotest_common.sh@10 -- # set +x 00:06:43.189 ************************************ 00:06:43.189 START TEST app_cmdline 00:06:43.189 ************************************ 00:06:43.189 09:25:31 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:43.189 * Looking for test storage... 00:06:43.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:43.189 09:25:31 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.189 09:25:31 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.189 09:25:31 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.447 09:25:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.447 09:25:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:43.447 09:25:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.447 09:25:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.447 --rc genhtml_branch_coverage=1 00:06:43.447 --rc genhtml_function_coverage=1 00:06:43.447 --rc 
genhtml_legend=1 00:06:43.447 --rc geninfo_all_blocks=1 00:06:43.447 --rc geninfo_unexecuted_blocks=1 00:06:43.447 00:06:43.447 ' 00:06:43.447 09:25:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.447 --rc genhtml_branch_coverage=1 00:06:43.447 --rc genhtml_function_coverage=1 00:06:43.447 --rc genhtml_legend=1 00:06:43.447 --rc geninfo_all_blocks=1 00:06:43.447 --rc geninfo_unexecuted_blocks=1 00:06:43.447 00:06:43.447 ' 00:06:43.447 09:25:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.447 --rc genhtml_branch_coverage=1 00:06:43.447 --rc genhtml_function_coverage=1 00:06:43.447 --rc genhtml_legend=1 00:06:43.448 --rc geninfo_all_blocks=1 00:06:43.448 --rc geninfo_unexecuted_blocks=1 00:06:43.448 00:06:43.448 ' 00:06:43.448 09:25:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.448 --rc genhtml_branch_coverage=1 00:06:43.448 --rc genhtml_function_coverage=1 00:06:43.448 --rc genhtml_legend=1 00:06:43.448 --rc geninfo_all_blocks=1 00:06:43.448 --rc geninfo_unexecuted_blocks=1 00:06:43.448 00:06:43.448 ' 00:06:43.448 09:25:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:43.448 09:25:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60070 00:06:43.448 09:25:31 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:43.448 09:25:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60070 00:06:43.448 09:25:31 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60070 ']' 00:06:43.448 09:25:31 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.448 09:25:31 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:06:43.448 09:25:31 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.448 09:25:31 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.448 09:25:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.448 [2024-11-15 09:25:31.814650] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:43.448 [2024-11-15 09:25:31.814925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:06:43.706 [2024-11-15 09:25:31.999635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.706 [2024-11-15 09:25:32.159808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:45.086 { 00:06:45.086 "version": "SPDK v25.01-pre git sha1 318515b44", 00:06:45.086 "fields": { 00:06:45.086 "major": 25, 00:06:45.086 "minor": 1, 00:06:45.086 "patch": 0, 00:06:45.086 "suffix": "-pre", 00:06:45.086 "commit": "318515b44" 00:06:45.086 } 00:06:45.086 } 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:45.086 09:25:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:45.086 09:25:33 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.344 request: 00:06:45.344 { 00:06:45.344 "method": "env_dpdk_get_mem_stats", 00:06:45.344 "req_id": 1 00:06:45.344 } 00:06:45.344 Got JSON-RPC error response 00:06:45.344 response: 00:06:45.344 { 00:06:45.344 "code": -32601, 00:06:45.344 "message": "Method not found" 00:06:45.344 } 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.344 09:25:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60070 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60070 ']' 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60070 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60070 00:06:45.344 killing process with pid 60070 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60070' 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@971 -- # kill 60070 00:06:45.344 09:25:33 app_cmdline -- common/autotest_common.sh@976 -- # wait 60070 00:06:48.625 00:06:48.625 real 0m4.985s 00:06:48.625 user 0m5.307s 00:06:48.625 sys 0m0.669s 00:06:48.625 09:25:36 app_cmdline -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.625 ************************************ 00:06:48.625 09:25:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.625 END TEST app_cmdline 00:06:48.625 ************************************ 00:06:48.625 09:25:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:48.625 09:25:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.625 09:25:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.625 09:25:36 -- common/autotest_common.sh@10 -- # set +x 00:06:48.625 ************************************ 00:06:48.625 START TEST version 00:06:48.625 ************************************ 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:48.625 * Looking for test storage... 00:06:48.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.625 09:25:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.625 09:25:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.625 09:25:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.625 09:25:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.625 09:25:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.625 09:25:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.625 09:25:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.625 09:25:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.625 09:25:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.625 09:25:36 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:48.625 09:25:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.625 09:25:36 version -- scripts/common.sh@344 -- # case "$op" in 00:06:48.625 09:25:36 version -- scripts/common.sh@345 -- # : 1 00:06:48.625 09:25:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.625 09:25:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.625 09:25:36 version -- scripts/common.sh@365 -- # decimal 1 00:06:48.625 09:25:36 version -- scripts/common.sh@353 -- # local d=1 00:06:48.625 09:25:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.625 09:25:36 version -- scripts/common.sh@355 -- # echo 1 00:06:48.625 09:25:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.625 09:25:36 version -- scripts/common.sh@366 -- # decimal 2 00:06:48.625 09:25:36 version -- scripts/common.sh@353 -- # local d=2 00:06:48.625 09:25:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.625 09:25:36 version -- scripts/common.sh@355 -- # echo 2 00:06:48.625 09:25:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.625 09:25:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.625 09:25:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.625 09:25:36 version -- scripts/common.sh@368 -- # return 0 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.625 --rc genhtml_branch_coverage=1 00:06:48.625 --rc genhtml_function_coverage=1 00:06:48.625 --rc genhtml_legend=1 00:06:48.625 --rc geninfo_all_blocks=1 00:06:48.625 --rc geninfo_unexecuted_blocks=1 00:06:48.625 00:06:48.625 ' 00:06:48.625 09:25:36 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:06:48.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.625 --rc genhtml_branch_coverage=1 00:06:48.625 --rc genhtml_function_coverage=1 00:06:48.626 --rc genhtml_legend=1 00:06:48.626 --rc geninfo_all_blocks=1 00:06:48.626 --rc geninfo_unexecuted_blocks=1 00:06:48.626 00:06:48.626 ' 00:06:48.626 09:25:36 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.626 --rc genhtml_branch_coverage=1 00:06:48.626 --rc genhtml_function_coverage=1 00:06:48.626 --rc genhtml_legend=1 00:06:48.626 --rc geninfo_all_blocks=1 00:06:48.626 --rc geninfo_unexecuted_blocks=1 00:06:48.626 00:06:48.626 ' 00:06:48.626 09:25:36 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.626 --rc genhtml_branch_coverage=1 00:06:48.626 --rc genhtml_function_coverage=1 00:06:48.626 --rc genhtml_legend=1 00:06:48.626 --rc geninfo_all_blocks=1 00:06:48.626 --rc geninfo_unexecuted_blocks=1 00:06:48.626 00:06:48.626 ' 00:06:48.626 09:25:36 version -- app/version.sh@17 -- # get_header_version major 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # cut -f2 00:06:48.626 09:25:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.626 09:25:36 version -- app/version.sh@17 -- # major=25 00:06:48.626 09:25:36 version -- app/version.sh@18 -- # get_header_version minor 00:06:48.626 09:25:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # cut -f2 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.626 09:25:36 version -- app/version.sh@18 -- # minor=1 00:06:48.626 09:25:36 
version -- app/version.sh@19 -- # get_header_version patch 00:06:48.626 09:25:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # cut -f2 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.626 09:25:36 version -- app/version.sh@19 -- # patch=0 00:06:48.626 09:25:36 version -- app/version.sh@20 -- # get_header_version suffix 00:06:48.626 09:25:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # cut -f2 00:06:48.626 09:25:36 version -- app/version.sh@14 -- # tr -d '"' 00:06:48.626 09:25:36 version -- app/version.sh@20 -- # suffix=-pre 00:06:48.626 09:25:36 version -- app/version.sh@22 -- # version=25.1 00:06:48.626 09:25:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:48.626 09:25:36 version -- app/version.sh@28 -- # version=25.1rc0 00:06:48.626 09:25:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:48.626 09:25:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:48.626 09:25:36 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:48.626 09:25:36 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:48.626 00:06:48.626 real 0m0.341s 00:06:48.626 user 0m0.206s 00:06:48.626 sys 0m0.189s 00:06:48.626 09:25:36 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.626 09:25:36 version -- common/autotest_common.sh@10 -- # set +x 00:06:48.626 ************************************ 00:06:48.626 END TEST version 00:06:48.626 ************************************ 00:06:48.626 
09:25:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:48.626 09:25:36 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:48.626 09:25:36 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:48.626 09:25:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.626 09:25:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.626 09:25:36 -- common/autotest_common.sh@10 -- # set +x 00:06:48.626 ************************************ 00:06:48.626 START TEST bdev_raid 00:06:48.626 ************************************ 00:06:48.626 09:25:36 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:48.626 * Looking for test storage... 00:06:48.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:48.626 09:25:37 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.626 09:25:37 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.626 09:25:37 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.886 09:25:37 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.886 --rc genhtml_branch_coverage=1 00:06:48.886 --rc genhtml_function_coverage=1 00:06:48.886 --rc genhtml_legend=1 00:06:48.886 --rc geninfo_all_blocks=1 00:06:48.886 --rc geninfo_unexecuted_blocks=1 00:06:48.886 00:06:48.886 ' 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.886 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:48.886 --rc genhtml_branch_coverage=1 00:06:48.886 --rc genhtml_function_coverage=1 00:06:48.886 --rc genhtml_legend=1 00:06:48.886 --rc geninfo_all_blocks=1 00:06:48.886 --rc geninfo_unexecuted_blocks=1 00:06:48.886 00:06:48.886 ' 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.886 --rc genhtml_branch_coverage=1 00:06:48.886 --rc genhtml_function_coverage=1 00:06:48.886 --rc genhtml_legend=1 00:06:48.886 --rc geninfo_all_blocks=1 00:06:48.886 --rc geninfo_unexecuted_blocks=1 00:06:48.886 00:06:48.886 ' 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.886 --rc genhtml_branch_coverage=1 00:06:48.886 --rc genhtml_function_coverage=1 00:06:48.886 --rc genhtml_legend=1 00:06:48.886 --rc geninfo_all_blocks=1 00:06:48.886 --rc geninfo_unexecuted_blocks=1 00:06:48.886 00:06:48.886 ' 00:06:48.886 09:25:37 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:48.886 09:25:37 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:48.886 09:25:37 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:48.886 09:25:37 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:48.886 09:25:37 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:48.886 09:25:37 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:48.886 09:25:37 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.886 09:25:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.886 ************************************ 
00:06:48.886 START TEST raid1_resize_data_offset_test 00:06:48.886 ************************************ 00:06:48.886 Process raid pid: 60263 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60263 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60263' 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60263 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 60263 ']' 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.886 09:25:37 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.886 [2024-11-15 09:25:37.288220] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:06:48.886 [2024-11-15 09:25:37.288451] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.143 [2024-11-15 09:25:37.471318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.143 [2024-11-15 09:25:37.600915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.401 [2024-11-15 09:25:37.824574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.401 [2024-11-15 09:25:37.824739] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.969 malloc0 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.969 malloc1 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.969 09:25:38 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.969 null0 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.969 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.969 [2024-11-15 09:25:38.315979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:49.970 [2024-11-15 09:25:38.318156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:49.970 [2024-11-15 09:25:38.318210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:49.970 [2024-11-15 09:25:38.318394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.970 [2024-11-15 09:25:38.318420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:49.970 [2024-11-15 09:25:38.318732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.970 [2024-11-15 09:25:38.318939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.970 [2024-11-15 09:25:38.318964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:49.970 [2024-11-15 09:25:38.319154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.970 [2024-11-15 09:25:38.375932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.970 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.537 malloc2 00:06:50.537 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.537 09:25:38 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:50.537 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.537 09:25:38 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.537 [2024-11-15 09:25:38.983333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:50.796 [2024-11-15 09:25:39.002826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.796 [2024-11-15 09:25:39.004988] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60263 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 60263 ']' 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 60263 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60263 00:06:50.796 killing process with pid 60263 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60263' 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 60263 00:06:50.796 [2024-11-15 09:25:39.100072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:50.796 09:25:39 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 60263 00:06:50.796 [2024-11-15 09:25:39.100243] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:50.796 [2024-11-15 09:25:39.100296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.796 [2024-11-15 09:25:39.100315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:50.796 [2024-11-15 09:25:39.141272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.796 [2024-11-15 09:25:39.141644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.796 [2024-11-15 09:25:39.141665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:52.698 [2024-11-15 09:25:41.076739] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.074 09:25:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:54.074 00:06:54.074 real 0m5.053s 00:06:54.074 user 0m4.958s 00:06:54.074 sys 0m0.561s 00:06:54.074 09:25:42 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.074 09:25:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.074 ************************************ 00:06:54.074 END TEST raid1_resize_data_offset_test 00:06:54.074 ************************************ 00:06:54.074 09:25:42 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:54.074 09:25:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:54.074 09:25:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.074 09:25:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.074 ************************************ 00:06:54.074 START TEST raid0_resize_superblock_test 00:06:54.074 ************************************ 00:06:54.074 Process raid pid: 60352 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60352 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60352' 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60352 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60352 ']' 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.075 09:25:42 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.075 [2024-11-15 09:25:42.408821] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:54.075 [2024-11-15 09:25:42.409071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.334 [2024-11-15 09:25:42.573915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.334 [2024-11-15 09:25:42.696154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.594 [2024-11-15 09:25:42.920002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.594 [2024-11-15 09:25:42.920130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.854 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:54.854 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:54.854 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:54.854 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.854 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:55.421 malloc0 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.421 [2024-11-15 09:25:43.863212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:55.421 [2024-11-15 09:25:43.863408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.421 [2024-11-15 09:25:43.863469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:55.421 [2024-11-15 09:25:43.863513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.421 [2024-11-15 09:25:43.866207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.421 [2024-11-15 09:25:43.866328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:55.421 pt0 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.421 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.679 328c8816-a96a-47fc-bf02-92a09d34a7d6 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.679 8cf6bf42-4213-4389-8435-2e80e2b16bbf 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.679 588adf1b-e584-4c97-8096-b63bbb684dab 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.679 [2024-11-15 09:25:43.992300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8cf6bf42-4213-4389-8435-2e80e2b16bbf is claimed 00:06:55.679 [2024-11-15 09:25:43.992466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 588adf1b-e584-4c97-8096-b63bbb684dab is claimed 00:06:55.679 [2024-11-15 09:25:43.992641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:55.679 [2024-11-15 09:25:43.992661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:55.679 [2024-11-15 09:25:43.992994] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:55.679 [2024-11-15 09:25:43.993221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:55.679 [2024-11-15 09:25:43.993246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:55.679 [2024-11-15 09:25:43.993449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.679 09:25:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:55.679 09:25:44 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.679 [2024-11-15 09:25:44.104439] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.679 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 [2024-11-15 09:25:44.152414] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.938 [2024-11-15 09:25:44.152577] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8cf6bf42-4213-4389-8435-2e80e2b16bbf' was resized: old size 131072, new size 204800 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 [2024-11-15 09:25:44.160246] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.938 [2024-11-15 09:25:44.160359] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '588adf1b-e584-4c97-8096-b63bbb684dab' was resized: old size 131072, new size 204800 00:06:55.938 [2024-11-15 09:25:44.160532] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.938 09:25:44 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 [2024-11-15 09:25:44.264085] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.938 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.939 [2024-11-15 09:25:44.291793] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:55.939 [2024-11-15 09:25:44.291955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:55.939 [2024-11-15 09:25:44.291974] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:55.939 [2024-11-15 09:25:44.291993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:55.939 [2024-11-15 09:25:44.292108] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.939 [2024-11-15 09:25:44.292142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.939 [2024-11-15 09:25:44.292155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.939 [2024-11-15 09:25:44.299688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:55.939 [2024-11-15 09:25:44.299796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:55.939 [2024-11-15 09:25:44.299822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:55.939 [2024-11-15 09:25:44.299834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:55.939 [2024-11-15 09:25:44.302153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:55.939 [2024-11-15 09:25:44.302194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:55.939 [2024-11-15 09:25:44.303840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8cf6bf42-4213-4389-8435-2e80e2b16bbf 00:06:55.939 [2024-11-15 09:25:44.303935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8cf6bf42-4213-4389-8435-2e80e2b16bbf is claimed 00:06:55.939 [2024-11-15 09:25:44.304062] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 588adf1b-e584-4c97-8096-b63bbb684dab 00:06:55.939 [2024-11-15 09:25:44.304083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 588adf1b-e584-4c97-8096-b63bbb684dab is claimed 00:06:55.939 [2024-11-15 09:25:44.304239] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 588adf1b-e584-4c97-8096-b63bbb684dab (2) smaller than existing raid bdev Raid (3) 00:06:55.939 [2024-11-15 09:25:44.304265] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 8cf6bf42-4213-4389-8435-2e80e2b16bbf: File exists 00:06:55.939 [2024-11-15 09:25:44.304302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:55.939 [2024-11-15 09:25:44.304314] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:55.939 pt0 00:06:55.939 [2024-11-15 09:25:44.304597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:55.939 [2024-11-15 09:25:44.304776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:55.939 [2024-11-15 09:25:44.304797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.939 [2024-11-15 09:25:44.305001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.939 [2024-11-15 09:25:44.320159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60352 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60352 ']' 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60352 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60352 00:06:55.939 killing process with pid 60352 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60352' 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60352 00:06:55.939 09:25:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60352 00:06:55.939 [2024-11-15 09:25:44.397817] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.939 [2024-11-15 09:25:44.397926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.939 [2024-11-15 09:25:44.397986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.939 [2024-11-15 09:25:44.397997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:57.908 [2024-11-15 09:25:46.015326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.845 ************************************ 00:06:58.845 END TEST raid0_resize_superblock_test 00:06:58.845 ************************************ 00:06:58.845 09:25:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:58.845 00:06:58.845 real 0m4.933s 00:06:58.845 user 0m5.104s 00:06:58.845 sys 0m0.603s 00:06:58.845 09:25:47 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:58.845 09:25:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.845 09:25:47 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:58.845 09:25:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:58.845 09:25:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.845 09:25:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.845 ************************************ 00:06:58.845 START TEST raid1_resize_superblock_test 00:06:58.845 ************************************ 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:58.845 Process raid pid: 60456 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60456 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60456' 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60456 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60456 ']' 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.845 09:25:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.104 [2024-11-15 09:25:47.405707] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:06:59.104 [2024-11-15 09:25:47.405877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.363 [2024-11-15 09:25:47.574148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.363 [2024-11-15 09:25:47.702864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.622 [2024-11-15 09:25:47.924210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.622 [2024-11-15 09:25:47.924254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.947 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.947 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:06:59.947 09:25:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:59.947 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.947 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.513 malloc0 00:07:00.513 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.513 09:25:48 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:00.513 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.513 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.513 [2024-11-15 09:25:48.885989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:00.513 [2024-11-15 09:25:48.886070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.513 [2024-11-15 09:25:48.886093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:00.513 [2024-11-15 09:25:48.886105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.513 [2024-11-15 09:25:48.888246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.513 [2024-11-15 09:25:48.888291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:00.513 pt0 00:07:00.513 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.513 09:25:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:00.513 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.513 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 b86d9dfe-0b1c-433f-9cd3-ecaa3822ca69 00:07:00.771 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:00.771 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:48 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 003f2b40-0151-4ef9-aafe-02b83a643f74 00:07:00.771 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:48 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:00.771 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:48 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 afd5b348-6994-49da-9ea6-41b82ad9a18e 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 [2024-11-15 09:25:49.010515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 003f2b40-0151-4ef9-aafe-02b83a643f74 is claimed 00:07:00.771 [2024-11-15 09:25:49.010761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev afd5b348-6994-49da-9ea6-41b82ad9a18e is claimed 00:07:00.771 [2024-11-15 09:25:49.011032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.771 [2024-11-15 09:25:49.011099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:00.771 [2024-11-15 09:25:49.011430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.771 [2024-11-15 09:25:49.011674] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.771 [2024-11-15 09:25:49.011689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:00.771 [2024-11-15 09:25:49.011932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 [2024-11-15 09:25:49.126655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 [2024-11-15 09:25:49.162533] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.771 [2024-11-15 09:25:49.162580] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '003f2b40-0151-4ef9-aafe-02b83a643f74' was resized: old size 131072, new size 204800 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:00.771 09:25:49 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.771 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.771 [2024-11-15 09:25:49.170411] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.771 [2024-11-15 09:25:49.170449] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'afd5b348-6994-49da-9ea6-41b82ad9a18e' was resized: old size 131072, new size 204800 00:07:00.771 [2024-11-15 09:25:49.170482] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.772 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:01.034 [2024-11-15 09:25:49.266292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.034 [2024-11-15 09:25:49.314055] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:01.034 [2024-11-15 09:25:49.314232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:01.034 [2024-11-15 09:25:49.314281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:01.034 [2024-11-15 09:25:49.314474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.034 [2024-11-15 09:25:49.314694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.034 [2024-11-15 09:25:49.314759] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.034 [2024-11-15 09:25:49.314773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.034 [2024-11-15 09:25:49.321945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:01.034 [2024-11-15 09:25:49.322016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:01.034 [2024-11-15 09:25:49.322045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:01.034 [2024-11-15 09:25:49.322065] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:01.034 [2024-11-15 09:25:49.325088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:01.034 [2024-11-15 09:25:49.325143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:01.034 pt0 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.034 
09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.034 [2024-11-15 09:25:49.327078] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 003f2b40-0151-4ef9-aafe-02b83a643f74 00:07:01.034 [2024-11-15 09:25:49.327159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 003f2b40-0151-4ef9-aafe-02b83a643f74 is claimed 00:07:01.034 [2024-11-15 09:25:49.327414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev afd5b348-6994-49da-9ea6-41b82ad9a18e 00:07:01.034 [2024-11-15 09:25:49.327443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev afd5b348-6994-49da-9ea6-41b82ad9a18e is claimed 00:07:01.034 [2024-11-15 09:25:49.327602] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev afd5b348-6994-49da-9ea6-41b82ad9a18e (2) smaller than existing raid bdev Raid (3) 00:07:01.034 [2024-11-15 09:25:49.327628] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 003f2b40-0151-4ef9-aafe-02b83a643f74: File exists 00:07:01.034 [2024-11-15 09:25:49.327677] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:01.034 [2024-11-15 09:25:49.327692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:01.034 [2024-11-15 09:25:49.328031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:01.034 [2024-11-15 09:25:49.328220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:01.034 [2024-11-15 09:25:49.328231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:01.034 
[2024-11-15 09:25:49.328415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.034 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.035 [2024-11-15 09:25:49.346253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60456 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60456 ']' 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60456 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60456 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.035 killing process with pid 60456 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60456' 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60456 00:07:01.035 09:25:49 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60456 00:07:01.035 [2024-11-15 09:25:49.428109] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.035 [2024-11-15 09:25:49.428212] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.035 [2024-11-15 09:25:49.428396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.035 [2024-11-15 09:25:49.428418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:02.948 [2024-11-15 09:25:50.998253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.886 ************************************ 00:07:03.886 END TEST raid1_resize_superblock_test 00:07:03.886 ************************************ 00:07:03.886 09:25:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:03.886 00:07:03.886 real 0m4.912s 00:07:03.886 user 0m5.084s 00:07:03.886 sys 0m0.615s 00:07:03.886 09:25:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.886 09:25:52 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 
09:25:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:03.886 09:25:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:03.886 09:25:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:03.886 09:25:52 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:03.886 09:25:52 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:03.886 09:25:52 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:03.886 09:25:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:03.886 09:25:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.886 09:25:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 ************************************ 00:07:03.886 START TEST raid_function_test_raid0 00:07:03.886 ************************************ 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:03.886 Process raid pid: 60564 00:07:03.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60564 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60564' 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60564 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60564 ']' 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:03.886 09:25:52 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.145 [2024-11-15 09:25:52.400222] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:07:04.145 [2024-11-15 09:25:52.400378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.145 [2024-11-15 09:25:52.570974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.404 [2024-11-15 09:25:52.703712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.663 [2024-11-15 09:25:52.942295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.663 [2024-11-15 09:25:52.942347] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.922 Base_1 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.922 Base_2 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.922 [2024-11-15 09:25:53.348212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.922 [2024-11-15 09:25:53.350200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.922 [2024-11-15 09:25:53.350270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:04.922 [2024-11-15 09:25:53.350282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:04.922 [2024-11-15 09:25:53.350538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:04.922 [2024-11-15 09:25:53.350692] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:04.922 [2024-11-15 09:25:53.350701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:04.922 [2024-11-15 09:25:53.350841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:04.922 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:05.182 [2024-11-15 09:25:53.575928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:05.182 /dev/nbd0 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:05.182 
09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.182 1+0 records in 00:07:05.182 1+0 records out 00:07:05.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406937 s, 10.1 MB/s 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.182 09:25:53 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.443 { 00:07:05.443 "nbd_device": "/dev/nbd0", 00:07:05.443 "bdev_name": "raid" 00:07:05.443 } 00:07:05.443 ]' 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.443 { 00:07:05.443 "nbd_device": "/dev/nbd0", 00:07:05.443 "bdev_name": "raid" 00:07:05.443 } 00:07:05.443 ]' 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:05.443 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:05.702 4096+0 records in 00:07:05.702 4096+0 records out 00:07:05.702 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0370587 s, 56.6 MB/s 00:07:05.702 09:25:53 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:05.963 4096+0 records in 00:07:05.963 4096+0 records out 00:07:05.963 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.237961 s, 8.8 MB/s 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:05.963 128+0 records in 00:07:05.963 128+0 records out 00:07:05.963 65536 bytes (66 kB, 64 KiB) copied, 0.00140465 s, 46.7 MB/s 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:05.963 2035+0 records in 00:07:05.963 2035+0 records out 00:07:05.963 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0153131 s, 68.0 MB/s 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:05.963 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.964 09:25:54 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:05.964 456+0 records in 00:07:05.964 456+0 records out 00:07:05.964 233472 bytes (233 kB, 228 KiB) copied, 0.00396463 s, 58.9 MB/s 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.964 09:25:54 
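The loop traced above is SPDK's `raid_unmap_data_verify` pattern: fill a reference file and the exported NBD device with the same random data, then for each (offset, length) pair zero the region in the file with `dd conv=notrunc`, discard the same region on the device with `blkdiscard`, flush, and `cmp` the full 2 MiB. A minimal standalone sketch of that pattern follows — `discard_verify`, `NBD`, and `FILE` are hypothetical names for illustration; the real test hardcodes `/dev/nbd0` and `/raidtest/raidrandtest` and lives in `bdev/bdev_raid.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the discard-verify loop seen in the trace (raid_unmap_data_verify).
# Assumes a discard-capable block device; do NOT point this at a disk you care about.
set -euo pipefail

blksize=512
rw_blk_num=4096                      # total test size: 4096 * 512 = 2 MiB
unmap_blk_offs=(0 1028 321)          # region offsets, in blocks (from the trace)
unmap_blk_nums=(128 2035 456)        # region lengths, in blocks

discard_verify() {
    local nbd=$1 file=$2 i
    # Seed both sides with identical random data:
    dd if=/dev/urandom of="$file" bs=$blksize count=$rw_blk_num status=none
    dd if="$file" of="$nbd" bs=$blksize count=$rw_blk_num oflag=direct status=none
    for ((i = 0; i < ${#unmap_blk_offs[@]}; i++)); do
        local off=$((unmap_blk_offs[i] * blksize))
        local len=$((unmap_blk_nums[i] * blksize))
        # Zero the region in the reference file, discard it on the device,
        # flush the page cache, then require both to read back identically.
        dd if=/dev/zero of="$file" bs=$blksize seek="${unmap_blk_offs[i]}" \
           count="${unmap_blk_nums[i]}" conv=notrunc status=none
        blkdiscard -o "$off" -l "$len" "$nbd"
        blockdev --flushbufs "$nbd"
        cmp -b -n $((rw_blk_num * blksize)) "$file" "$nbd"
    done
}
```

The byte offsets in the trace are exactly these products: block offset 1028 becomes `unmap_off=526336` and 2035 blocks become `unmap_len=1041920` at the 512-byte logical sector size that `lsblk -o LOG-SEC` reported. The `cmp` passing after each `blkdiscard` relies on the raid bdev reading discarded ranges back as zeroes, matching the zeroed reference file.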
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.964 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.223 [2024-11-15 09:25:54.557914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:06.223 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60564 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60564 ']' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60564 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60564 00:07:06.481 killing process with pid 60564 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60564' 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60564 
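The `nbd_get_count` steps traced here (`nbd_common.sh@63`–`@66`) count attached NBD devices by piping the JSON from `rpc.py nbd_get_disks` through `jq` and `grep -c`. A small sketch of that idiom, assuming `jq` is installed — the `json` sample below mirrors the payload shape shown earlier in the trace:

```shell
# Count /dev/nbd* entries in an nbd_get_disks-style JSON payload.
# The `|| true` mirrors the trace's `nbd_common.sh@65 -- # true`: grep -c
# exits nonzero when it matches nothing, but still prints 0.
json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid" } ]'
count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "$count"    # 1 while the disk is attached

# After nbd_stop_disk the RPC returns an empty array and the count drops to 0:
empty=$(echo '[]' | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "$empty"    # 0
```

The surrounding `bdev_raid.sh@85`/`@93` checks (`'[' 1 -ne 1 ']'`, `'[' 0 -ne 0 ']'`) then assert the count is 1 right after `nbd_start_disk` and 0 after teardown.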
00:07:06.481 [2024-11-15 09:25:54.920557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.481 09:25:54 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60564 00:07:06.481 [2024-11-15 09:25:54.920696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.481 [2024-11-15 09:25:54.920761] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.481 [2024-11-15 09:25:54.920780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:06.739 [2024-11-15 09:25:55.156393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.117 09:25:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:08.117 00:07:08.117 real 0m4.086s 00:07:08.117 user 0m4.748s 00:07:08.117 sys 0m0.975s 00:07:08.117 09:25:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.117 09:25:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:08.117 ************************************ 00:07:08.117 END TEST raid_function_test_raid0 00:07:08.117 ************************************ 00:07:08.117 09:25:56 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:08.117 09:25:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:08.117 09:25:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.117 09:25:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.117 ************************************ 00:07:08.117 START TEST raid_function_test_concat 00:07:08.117 ************************************ 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60688 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60688' 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.117 Process raid pid: 60688 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60688 00:07:08.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60688 ']' 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.117 09:25:56 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:08.117 [2024-11-15 09:25:56.558801] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:07:08.117 [2024-11-15 09:25:56.558945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.377 [2024-11-15 09:25:56.735702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.636 [2024-11-15 09:25:56.875971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.895 [2024-11-15 09:25:57.122735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.895 [2024-11-15 09:25:57.122794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.156 Base_1 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.156 Base_2 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.156 [2024-11-15 09:25:57.546797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.156 [2024-11-15 09:25:57.549096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.156 [2024-11-15 09:25:57.549238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:09.156 [2024-11-15 09:25:57.549255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.156 [2024-11-15 09:25:57.549535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:09.156 [2024-11-15 09:25:57.549696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:09.156 [2024-11-15 09:25:57.549706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:09.156 [2024-11-15 09:25:57.549889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.156 09:25:57 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:09.156 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:09.417 [2024-11-15 09:25:57.806471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:09.417 /dev/nbd0 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.417 1+0 records in 00:07:09.417 1+0 records out 00:07:09.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105693 s, 3.9 MB/s 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:09.417 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:07:09.676 09:25:57 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.935 { 00:07:09.935 "nbd_device": "/dev/nbd0", 00:07:09.935 "bdev_name": "raid" 00:07:09.935 } 00:07:09.935 ]' 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.935 { 00:07:09.935 "nbd_device": "/dev/nbd0", 00:07:09.935 "bdev_name": "raid" 00:07:09.935 } 00:07:09.935 ]' 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:09.935 09:25:58 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:09.935 4096+0 records in 00:07:09.935 4096+0 records out 00:07:09.935 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0275929 s, 76.0 MB/s 00:07:09.935 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:10.195 4096+0 records in 00:07:10.195 4096+0 records out 00:07:10.195 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.246666 s, 8.5 MB/s 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 
/raidtest/raidrandtest /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:10.195 128+0 records in 00:07:10.195 128+0 records out 00:07:10.195 65536 bytes (66 kB, 64 KiB) copied, 0.00217269 s, 30.2 MB/s 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:10.195 2035+0 records in 00:07:10.195 2035+0 records out 00:07:10.195 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00750416 s, 139 MB/s 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:10.195 09:25:58 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:10.195 456+0 records in 00:07:10.195 456+0 records out 00:07:10.195 233472 bytes (233 kB, 228 KiB) copied, 0.0041612 s, 56.1 MB/s 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:10.195 
09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.195 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:10.455 [2024-11-15 09:25:58.881058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:10.455 09:25:58 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:10.714 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:10.714 09:25:59 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:10.714 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60688 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60688 ']' 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60688 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60688 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:10.974 killing process with pid 60688 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60688' 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60688 00:07:10.974 [2024-11-15 09:25:59.290476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.974 [2024-11-15 09:25:59.290701] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.974 09:25:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60688 00:07:10.974 [2024-11-15 09:25:59.290808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.974 [2024-11-15 09:25:59.290831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:11.234 [2024-11-15 09:25:59.555250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.615 09:26:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:12.615 00:07:12.615 real 0m4.483s 00:07:12.615 user 0m5.125s 00:07:12.615 sys 0m1.168s 00:07:12.615 09:26:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.615 09:26:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:12.615 ************************************ 00:07:12.615 END TEST raid_function_test_concat 00:07:12.615 ************************************ 00:07:12.615 09:26:01 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:12.615 09:26:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:12.615 09:26:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.615 09:26:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.615 ************************************ 00:07:12.615 START TEST raid0_resize_test 00:07:12.615 ************************************ 00:07:12.615 09:26:01 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60828 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60828' 00:07:12.615 Process raid pid: 60828 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60828 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60828 ']' 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.615 09:26:01 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.872 [2024-11-15 09:26:01.119668] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:12.872 [2024-11-15 09:26:01.119801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.872 [2024-11-15 09:26:01.288949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.132 [2024-11-15 09:26:01.464829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.391 [2024-11-15 09:26:01.744025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.391 [2024-11-15 09:26:01.744092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.651 Base_1 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:13.651 
09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.651 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.652 Base_2 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.652 [2024-11-15 09:26:02.026694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:13.652 [2024-11-15 09:26:02.029237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:13.652 [2024-11-15 09:26:02.029325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:13.652 [2024-11-15 09:26:02.029340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:13.652 [2024-11-15 09:26:02.029682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:13.652 [2024-11-15 09:26:02.029879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:13.652 [2024-11-15 09:26:02.029899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:13.652 [2024-11-15 09:26:02.030120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:13.652 
09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.652 [2024-11-15 09:26:02.034582] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:13.652 [2024-11-15 09:26:02.034635] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:13.652 true 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.652 [2024-11-15 09:26:02.050810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:13.652 [2024-11-15 09:26:02.094583] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:13.652 [2024-11-15 09:26:02.094642] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:13.652 [2024-11-15 09:26:02.094688] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:13.652 true 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.652 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.652 [2024-11-15 09:26:02.110685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60828 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60828 ']' 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60828 
00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60828 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:13.912 killing process with pid 60828 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60828' 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60828 00:07:13.912 [2024-11-15 09:26:02.188512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.912 09:26:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60828 00:07:13.912 [2024-11-15 09:26:02.188659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.912 [2024-11-15 09:26:02.188725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.912 [2024-11-15 09:26:02.188737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:13.912 [2024-11-15 09:26:02.211010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.293 09:26:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:15.293 00:07:15.293 real 0m2.543s 00:07:15.293 user 0m2.639s 00:07:15.293 sys 0m0.439s 00:07:15.293 09:26:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.293 09:26:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.293 ************************************ 00:07:15.293 END TEST 
raid0_resize_test 00:07:15.293 ************************************ 00:07:15.293 09:26:03 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:15.293 09:26:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:15.293 09:26:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.293 09:26:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.293 ************************************ 00:07:15.293 START TEST raid1_resize_test 00:07:15.293 ************************************ 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60884 00:07:15.293 Process raid pid: 60884 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60884' 00:07:15.293 09:26:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60884 00:07:15.293 09:26:03 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60884 ']' 00:07:15.294 09:26:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.294 09:26:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.294 09:26:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.294 09:26:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.294 09:26:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.294 [2024-11-15 09:26:03.723992] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:15.294 [2024-11-15 09:26:03.724136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.553 [2024-11-15 09:26:03.906109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.812 [2024-11-15 09:26:04.058279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.072 [2024-11-15 09:26:04.306664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.072 [2024-11-15 09:26:04.306728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:16.332 09:26:04 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 Base_1 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 Base_2 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.332 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.332 [2024-11-15 09:26:04.644647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:16.332 [2024-11-15 09:26:04.647071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:16.332 [2024-11-15 09:26:04.647165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:16.332 [2024-11-15 09:26:04.647181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:16.333 [2024-11-15 09:26:04.647522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:16.333 [2024-11-15 09:26:04.647730] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:16.333 [2024-11-15 09:26:04.647751] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:16.333 [2024-11-15 09:26:04.647957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 [2024-11-15 09:26:04.656621] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:16.333 [2024-11-15 09:26:04.656666] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:16.333 true 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:16.333 [2024-11-15 09:26:04.672791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:16.333 09:26:04 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 [2024-11-15 09:26:04.724556] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:16.333 [2024-11-15 09:26:04.724598] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:16.333 [2024-11-15 09:26:04.724639] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:16.333 true 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.333 [2024-11-15 09:26:04.740740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:16.333 09:26:04 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60884 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60884 ']' 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60884 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:16.333 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60884 00:07:16.592 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:16.592 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:16.592 killing process with pid 60884 00:07:16.592 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60884' 00:07:16.592 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60884 00:07:16.592 [2024-11-15 09:26:04.828121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.592 09:26:04 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60884 00:07:16.592 [2024-11-15 09:26:04.828285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.592 [2024-11-15 09:26:04.829059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.592 [2024-11-15 09:26:04.829102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:16.592 [2024-11-15 09:26:04.851348] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:07:17.973 09:26:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:17.973 00:07:17.973 real 0m2.647s 00:07:17.973 user 0m2.738s 00:07:17.973 sys 0m0.460s 00:07:17.973 09:26:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.973 09:26:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 ************************************ 00:07:17.973 END TEST raid1_resize_test 00:07:17.973 ************************************ 00:07:17.973 09:26:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:17.973 09:26:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:17.973 09:26:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:17.973 09:26:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:17.973 09:26:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.973 09:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.973 ************************************ 00:07:17.973 START TEST raid_state_function_test 00:07:17.973 ************************************ 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60952 
00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60952' 00:07:17.973 Process raid pid: 60952 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60952 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60952 ']' 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.973 09:26:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.233 [2024-11-15 09:26:06.453176] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:07:18.233 [2024-11-15 09:26:06.453316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:18.233 [2024-11-15 09:26:06.620010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.493 [2024-11-15 09:26:06.780133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.753 [2024-11-15 09:26:07.062973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:18.753 [2024-11-15 09:26:07.063043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.015 [2024-11-15 09:26:07.384659] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:19.015 [2024-11-15 09:26:07.384727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:19.015 [2024-11-15 09:26:07.384741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:19.015 [2024-11-15 09:26:07.384753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:19.015 "name": "Existed_Raid",
00:07:19.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:19.015 "strip_size_kb": 64,
00:07:19.015 "state": "configuring",
00:07:19.015 "raid_level": "raid0",
00:07:19.015 "superblock": false,
00:07:19.015 "num_base_bdevs": 2,
00:07:19.015 "num_base_bdevs_discovered": 0,
00:07:19.015 "num_base_bdevs_operational": 2,
00:07:19.015 "base_bdevs_list": [
00:07:19.015 {
00:07:19.015 "name": "BaseBdev1",
00:07:19.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:19.015 "is_configured": false,
00:07:19.015 "data_offset": 0,
00:07:19.015 "data_size": 0
00:07:19.015 },
00:07:19.015 {
00:07:19.015 "name": "BaseBdev2",
00:07:19.015 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:19.015 "is_configured": false,
00:07:19.015 "data_offset": 0,
00:07:19.015 "data_size": 0
00:07:19.015 }
00:07:19.015 ]
00:07:19.015 }'
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:19.015 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.626 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:19.626 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.626 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.626 [2024-11-15 09:26:07.823918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:19.626 [2024-11-15 09:26:07.823967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:19.626 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.626 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:19.626 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.626 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.626 [2024-11-15 09:26:07.831860] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:19.627 [2024-11-15 09:26:07.831932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:19.627 [2024-11-15 09:26:07.831945] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:19.627 [2024-11-15 09:26:07.831961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.627 [2024-11-15 09:26:07.890339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:19.627 BaseBdev1
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.627 [
00:07:19.627 {
00:07:19.627 "name": "BaseBdev1",
00:07:19.627 "aliases": [
00:07:19.627 "73208d63-5824-4250-a4ef-2b2cff77c9ae"
00:07:19.627 ],
00:07:19.627 "product_name": "Malloc disk",
00:07:19.627 "block_size": 512,
00:07:19.627 "num_blocks": 65536,
00:07:19.627 "uuid": "73208d63-5824-4250-a4ef-2b2cff77c9ae",
00:07:19.627 "assigned_rate_limits": {
00:07:19.627 "rw_ios_per_sec": 0,
00:07:19.627 "rw_mbytes_per_sec": 0,
00:07:19.627 "r_mbytes_per_sec": 0,
00:07:19.627 "w_mbytes_per_sec": 0
00:07:19.627 },
00:07:19.627 "claimed": true,
00:07:19.627 "claim_type": "exclusive_write",
00:07:19.627 "zoned": false,
00:07:19.627 "supported_io_types": {
00:07:19.627 "read": true,
00:07:19.627 "write": true,
00:07:19.627 "unmap": true,
00:07:19.627 "flush": true,
00:07:19.627 "reset": true,
00:07:19.627 "nvme_admin": false,
00:07:19.627 "nvme_io": false,
00:07:19.627 "nvme_io_md": false,
00:07:19.627 "write_zeroes": true,
00:07:19.627 "zcopy": true,
00:07:19.627 "get_zone_info": false,
00:07:19.627 "zone_management": false,
00:07:19.627 "zone_append": false,
00:07:19.627 "compare": false,
00:07:19.627 "compare_and_write": false,
00:07:19.627 "abort": true,
00:07:19.627 "seek_hole": false,
00:07:19.627 "seek_data": false,
00:07:19.627 "copy": true,
00:07:19.627 "nvme_iov_md": false
00:07:19.627 },
00:07:19.627 "memory_domains": [
00:07:19.627 {
00:07:19.627 "dma_device_id": "system",
00:07:19.627 "dma_device_type": 1
00:07:19.627 },
00:07:19.627 {
00:07:19.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:19.627 "dma_device_type": 2
00:07:19.627 }
00:07:19.627 ],
00:07:19.627 "driver_specific": {}
00:07:19.627 }
00:07:19.627 ]
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:19.627 "name": "Existed_Raid",
00:07:19.627 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:19.627 "strip_size_kb": 64,
00:07:19.627 "state": "configuring",
00:07:19.627 "raid_level": "raid0",
00:07:19.627 "superblock": false,
00:07:19.627 "num_base_bdevs": 2,
00:07:19.627 "num_base_bdevs_discovered": 1,
00:07:19.627 "num_base_bdevs_operational": 2,
00:07:19.627 "base_bdevs_list": [
00:07:19.627 {
00:07:19.627 "name": "BaseBdev1",
00:07:19.627 "uuid": "73208d63-5824-4250-a4ef-2b2cff77c9ae",
00:07:19.627 "is_configured": true,
00:07:19.627 "data_offset": 0,
00:07:19.627 "data_size": 65536
00:07:19.627 },
00:07:19.627 {
00:07:19.627 "name": "BaseBdev2",
00:07:19.627 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:19.627 "is_configured": false,
00:07:19.627 "data_offset": 0,
00:07:19.627 "data_size": 0
00:07:19.627 }
00:07:19.627 ]
00:07:19.627 }'
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:19.627 09:26:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.196 [2024-11-15 09:26:08.369638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:20.196 [2024-11-15 09:26:08.369715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.196 [2024-11-15 09:26:08.381700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:20.196 [2024-11-15 09:26:08.384295] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:20.196 [2024-11-15 09:26:08.384348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:20.196 "name": "Existed_Raid",
00:07:20.196 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:20.196 "strip_size_kb": 64,
00:07:20.196 "state": "configuring",
00:07:20.196 "raid_level": "raid0",
00:07:20.196 "superblock": false,
00:07:20.196 "num_base_bdevs": 2,
00:07:20.196 "num_base_bdevs_discovered": 1,
00:07:20.196 "num_base_bdevs_operational": 2,
00:07:20.196 "base_bdevs_list": [
00:07:20.196 {
00:07:20.196 "name": "BaseBdev1",
00:07:20.196 "uuid": "73208d63-5824-4250-a4ef-2b2cff77c9ae",
00:07:20.196 "is_configured": true,
00:07:20.196 "data_offset": 0,
00:07:20.196 "data_size": 65536
00:07:20.196 },
00:07:20.196 {
00:07:20.196 "name": "BaseBdev2",
00:07:20.196 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:20.196 "is_configured": false,
00:07:20.196 "data_offset": 0,
00:07:20.196 "data_size": 0
00:07:20.196 }
00:07:20.196 ]
00:07:20.196 }'
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:20.196 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.457 [2024-11-15 09:26:08.908418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:20.457 [2024-11-15 09:26:08.908489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:20.457 [2024-11-15 09:26:08.908501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:20.457 [2024-11-15 09:26:08.908828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:20.457 [2024-11-15 09:26:08.909050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:20.457 [2024-11-15 09:26:08.909075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:20.457 [2024-11-15 09:26:08.909409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:20.457 BaseBdev2
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.457 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.716 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.717 [
00:07:20.717 {
00:07:20.717 "name": "BaseBdev2",
00:07:20.717 "aliases": [
00:07:20.717 "9a3e208d-9537-4067-a871-df0db148f7c9"
00:07:20.717 ],
00:07:20.717 "product_name": "Malloc disk",
00:07:20.717 "block_size": 512,
00:07:20.717 "num_blocks": 65536,
00:07:20.717 "uuid": "9a3e208d-9537-4067-a871-df0db148f7c9",
00:07:20.717 "assigned_rate_limits": {
00:07:20.717 "rw_ios_per_sec": 0,
00:07:20.717 "rw_mbytes_per_sec": 0,
00:07:20.717 "r_mbytes_per_sec": 0,
00:07:20.717 "w_mbytes_per_sec": 0
00:07:20.717 },
00:07:20.717 "claimed": true,
00:07:20.717 "claim_type": "exclusive_write",
00:07:20.717 "zoned": false,
00:07:20.717 "supported_io_types": {
00:07:20.717 "read": true,
00:07:20.717 "write": true,
00:07:20.717 "unmap": true,
00:07:20.717 "flush": true,
00:07:20.717 "reset": true,
00:07:20.717 "nvme_admin": false,
00:07:20.717 "nvme_io": false,
00:07:20.717 "nvme_io_md": false,
00:07:20.717 "write_zeroes": true,
00:07:20.717 "zcopy": true,
00:07:20.717 "get_zone_info": false,
00:07:20.717 "zone_management": false,
00:07:20.717 "zone_append": false,
00:07:20.717 "compare": false,
00:07:20.717 "compare_and_write": false,
00:07:20.717 "abort": true,
00:07:20.717 "seek_hole": false,
00:07:20.717 "seek_data": false,
00:07:20.717 "copy": true,
00:07:20.717 "nvme_iov_md": false
00:07:20.717 },
00:07:20.717 "memory_domains": [
00:07:20.717 {
00:07:20.717 "dma_device_id": "system",
00:07:20.717 "dma_device_type": 1
00:07:20.717 },
00:07:20.717 {
00:07:20.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:20.717 "dma_device_type": 2
00:07:20.717 }
00:07:20.717 ],
00:07:20.717 "driver_specific": {}
00:07:20.717 }
00:07:20.717 ]
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:20.717 09:26:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:20.717 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:20.717 "name": "Existed_Raid",
00:07:20.717 "uuid": "f01a1487-d946-4d0e-a1cc-9c7e45ef6e24",
00:07:20.717 "strip_size_kb": 64,
00:07:20.717 "state": "online",
00:07:20.717 "raid_level": "raid0",
00:07:20.717 "superblock": false,
00:07:20.717 "num_base_bdevs": 2,
00:07:20.717 "num_base_bdevs_discovered": 2,
00:07:20.717 "num_base_bdevs_operational": 2,
00:07:20.717 "base_bdevs_list": [
00:07:20.717 {
00:07:20.717 "name": "BaseBdev1",
00:07:20.717 "uuid": "73208d63-5824-4250-a4ef-2b2cff77c9ae",
00:07:20.717 "is_configured": true,
00:07:20.717 "data_offset": 0,
00:07:20.717 "data_size": 65536
00:07:20.717 },
00:07:20.717 {
00:07:20.717 "name": "BaseBdev2",
00:07:20.717 "uuid": "9a3e208d-9537-4067-a871-df0db148f7c9",
00:07:20.717 "is_configured": true,
00:07:20.717 "data_offset": 0,
00:07:20.717 "data_size": 65536
00:07:20.717 }
00:07:20.717 ]
00:07:20.717 }'
00:07:20.717 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:20.717 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:20.999 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:20.999 [2024-11-15 09:26:09.448023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:21.261 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.261 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:21.261 "name": "Existed_Raid",
00:07:21.261 "aliases": [
00:07:21.261 "f01a1487-d946-4d0e-a1cc-9c7e45ef6e24"
00:07:21.261 ],
00:07:21.261 "product_name": "Raid Volume",
00:07:21.261 "block_size": 512,
00:07:21.261 "num_blocks": 131072,
00:07:21.261 "uuid": "f01a1487-d946-4d0e-a1cc-9c7e45ef6e24",
00:07:21.261 "assigned_rate_limits": {
00:07:21.261 "rw_ios_per_sec": 0,
00:07:21.261 "rw_mbytes_per_sec": 0,
00:07:21.261 "r_mbytes_per_sec": 0,
00:07:21.261 "w_mbytes_per_sec": 0
00:07:21.261 },
00:07:21.261 "claimed": false,
00:07:21.261 "zoned": false,
00:07:21.261 "supported_io_types": {
00:07:21.261 "read": true,
00:07:21.261 "write": true,
00:07:21.261 "unmap": true,
00:07:21.261 "flush": true,
00:07:21.261 "reset": true,
00:07:21.261 "nvme_admin": false,
00:07:21.261 "nvme_io": false,
00:07:21.261 "nvme_io_md": false,
00:07:21.261 "write_zeroes": true,
00:07:21.261 "zcopy": false,
00:07:21.261 "get_zone_info": false,
00:07:21.261 "zone_management": false,
00:07:21.261 "zone_append": false,
00:07:21.261 "compare": false,
00:07:21.261 "compare_and_write": false,
00:07:21.261 "abort": false,
00:07:21.261 "seek_hole": false,
00:07:21.261 "seek_data": false,
00:07:21.261 "copy": false,
00:07:21.261 "nvme_iov_md": false
00:07:21.261 },
00:07:21.261 "memory_domains": [
00:07:21.261 {
00:07:21.261 "dma_device_id": "system",
00:07:21.261 "dma_device_type": 1
00:07:21.261 },
00:07:21.262 {
00:07:21.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:21.262 "dma_device_type": 2
00:07:21.262 },
00:07:21.262 {
00:07:21.262 "dma_device_id": "system",
00:07:21.262 "dma_device_type": 1
00:07:21.262 },
00:07:21.262 {
00:07:21.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:21.262 "dma_device_type": 2
00:07:21.262 }
00:07:21.262 ],
00:07:21.262 "driver_specific": {
00:07:21.262 "raid": {
00:07:21.262 "uuid": "f01a1487-d946-4d0e-a1cc-9c7e45ef6e24",
00:07:21.262 "strip_size_kb": 64,
00:07:21.262 "state": "online",
00:07:21.262 "raid_level": "raid0",
00:07:21.262 "superblock": false,
00:07:21.262 "num_base_bdevs": 2,
00:07:21.262 "num_base_bdevs_discovered": 2,
00:07:21.262 "num_base_bdevs_operational": 2,
00:07:21.262 "base_bdevs_list": [
00:07:21.262 {
00:07:21.262 "name": "BaseBdev1",
00:07:21.262 "uuid": "73208d63-5824-4250-a4ef-2b2cff77c9ae",
00:07:21.262 "is_configured": true,
00:07:21.262 "data_offset": 0,
00:07:21.262 "data_size": 65536
00:07:21.262 },
00:07:21.262 {
00:07:21.262 "name": "BaseBdev2",
00:07:21.262 "uuid": "9a3e208d-9537-4067-a871-df0db148f7c9",
00:07:21.262 "is_configured": true,
00:07:21.262 "data_offset": 0,
00:07:21.262 "data_size": 65536
00:07:21.262 }
00:07:21.262 ]
00:07:21.262 }
00:07:21.262 }
00:07:21.262 }'
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:21.262 BaseBdev2'
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.262 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.262 [2024-11-15 09:26:09.695350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:21.262 [2024-11-15 09:26:09.695399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:21.262 [2024-11-15 09:26:09.695470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:21.522 "name": "Existed_Raid",
00:07:21.522 "uuid": "f01a1487-d946-4d0e-a1cc-9c7e45ef6e24",
00:07:21.522 "strip_size_kb": 64,
00:07:21.522 "state": "offline",
00:07:21.522 "raid_level": "raid0",
00:07:21.522 "superblock": false,
00:07:21.522 "num_base_bdevs": 2,
00:07:21.522 "num_base_bdevs_discovered": 1,
00:07:21.522 "num_base_bdevs_operational": 1,
00:07:21.522 "base_bdevs_list": [
00:07:21.522 {
00:07:21.522 "name": null,
00:07:21.522 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:21.522 "is_configured": false,
00:07:21.522 "data_offset": 0,
00:07:21.522 "data_size": 65536
00:07:21.522 },
00:07:21.522 {
00:07:21.522 "name": "BaseBdev2",
00:07:21.522 "uuid": "9a3e208d-9537-4067-a871-df0db148f7c9",
00:07:21.522 "is_configured": true,
00:07:21.522 "data_offset": 0,
00:07:21.522 "data_size": 65536
00:07:21.522 }
00:07:21.522 ]
00:07:21.522 }'
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:21.522 09:26:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.915 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.176 [2024-11-15 09:26:10.381520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:22.176 [2024-11-15 09:26:10.381636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:22.176 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.176 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:22.176 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:22.176 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:22.176 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60952
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60952 ']'
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60952
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60952
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:22.177 killing process with pid 60952
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60952'
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60952
00:07:22.177 [2024-11-15 09:26:10.610802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:22.177 09:26:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60952
00:07:22.177 [2024-11-15 09:26:10.633004] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:24.085 09:26:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:07:24.085
00:07:24.085 real 0m5.705s
00:07:24.085 user 0m8.030s
00:07:24.085 sys 0m0.981s
00:07:24.085 09:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:24.085 09:26:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.085 ************************************
00:07:24.085 END TEST raid_state_function_test
00:07:24.085 ************************************
00:07:24.085 09:26:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:07:24.085 09:26:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1
']' 00:07:24.085 09:26:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.085 09:26:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.085 ************************************ 00:07:24.085 START TEST raid_state_function_test_sb 00:07:24.085 ************************************ 00:07:24.085 09:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:07:24.085 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:24.085 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:24.085 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:24.085 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:24.085 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61205 00:07:24.086 Process raid pid: 61205 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61205' 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61205 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61205 ']' 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.086 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.086 09:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.086 [2024-11-15 09:26:12.235030] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:24.086 [2024-11-15 09:26:12.235186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.086 [2024-11-15 09:26:12.423634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.345 [2024-11-15 09:26:12.584340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.624 [2024-11-15 09:26:12.862690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.624 [2024-11-15 09:26:12.862757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.917 [2024-11-15 09:26:13.196523] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:24.917 [2024-11-15 09:26:13.196598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.917 [2024-11-15 09:26:13.196612] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.917 [2024-11-15 09:26:13.196625] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.917 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.918 
09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.918 "name": "Existed_Raid", 00:07:24.918 "uuid": "5ad25a2a-1f1a-4250-b315-7a1a982f90b4", 00:07:24.918 "strip_size_kb": 64, 00:07:24.918 "state": "configuring", 00:07:24.918 "raid_level": "raid0", 00:07:24.918 "superblock": true, 00:07:24.918 "num_base_bdevs": 2, 00:07:24.918 "num_base_bdevs_discovered": 0, 00:07:24.918 "num_base_bdevs_operational": 2, 00:07:24.918 "base_bdevs_list": [ 00:07:24.918 { 00:07:24.918 "name": "BaseBdev1", 00:07:24.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.918 "is_configured": false, 00:07:24.918 "data_offset": 0, 00:07:24.918 "data_size": 0 00:07:24.918 }, 00:07:24.918 { 00:07:24.918 "name": "BaseBdev2", 00:07:24.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.918 "is_configured": false, 00:07:24.918 "data_offset": 0, 00:07:24.918 "data_size": 0 00:07:24.918 } 00:07:24.918 ] 00:07:24.918 }' 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.918 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.488 [2024-11-15 09:26:13.675606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:25.488 [2024-11-15 09:26:13.675660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.488 [2024-11-15 09:26:13.687589] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.488 [2024-11-15 09:26:13.687648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.488 [2024-11-15 09:26:13.687659] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.488 [2024-11-15 09:26:13.687674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.488 [2024-11-15 09:26:13.750519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.488 BaseBdev1 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.488 [ 00:07:25.488 { 00:07:25.488 "name": "BaseBdev1", 00:07:25.488 "aliases": [ 00:07:25.488 "0e1c99fa-32ce-4e49-9c2c-42d437c23d3d" 00:07:25.488 ], 00:07:25.488 "product_name": "Malloc disk", 00:07:25.488 "block_size": 512, 00:07:25.488 "num_blocks": 65536, 00:07:25.488 "uuid": "0e1c99fa-32ce-4e49-9c2c-42d437c23d3d", 00:07:25.488 "assigned_rate_limits": { 00:07:25.488 "rw_ios_per_sec": 0, 00:07:25.488 "rw_mbytes_per_sec": 0, 00:07:25.488 "r_mbytes_per_sec": 0, 00:07:25.488 "w_mbytes_per_sec": 0 00:07:25.488 }, 00:07:25.488 "claimed": true, 
00:07:25.488 "claim_type": "exclusive_write", 00:07:25.488 "zoned": false, 00:07:25.488 "supported_io_types": { 00:07:25.488 "read": true, 00:07:25.488 "write": true, 00:07:25.488 "unmap": true, 00:07:25.488 "flush": true, 00:07:25.488 "reset": true, 00:07:25.488 "nvme_admin": false, 00:07:25.488 "nvme_io": false, 00:07:25.488 "nvme_io_md": false, 00:07:25.488 "write_zeroes": true, 00:07:25.488 "zcopy": true, 00:07:25.488 "get_zone_info": false, 00:07:25.488 "zone_management": false, 00:07:25.488 "zone_append": false, 00:07:25.488 "compare": false, 00:07:25.488 "compare_and_write": false, 00:07:25.488 "abort": true, 00:07:25.488 "seek_hole": false, 00:07:25.488 "seek_data": false, 00:07:25.488 "copy": true, 00:07:25.488 "nvme_iov_md": false 00:07:25.488 }, 00:07:25.488 "memory_domains": [ 00:07:25.488 { 00:07:25.488 "dma_device_id": "system", 00:07:25.488 "dma_device_type": 1 00:07:25.488 }, 00:07:25.488 { 00:07:25.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.488 "dma_device_type": 2 00:07:25.488 } 00:07:25.488 ], 00:07:25.488 "driver_specific": {} 00:07:25.488 } 00:07:25.488 ] 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.488 09:26:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.488 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.489 "name": "Existed_Raid", 00:07:25.489 "uuid": "959a21ab-78a5-4749-a776-813344db4e1e", 00:07:25.489 "strip_size_kb": 64, 00:07:25.489 "state": "configuring", 00:07:25.489 "raid_level": "raid0", 00:07:25.489 "superblock": true, 00:07:25.489 "num_base_bdevs": 2, 00:07:25.489 "num_base_bdevs_discovered": 1, 00:07:25.489 "num_base_bdevs_operational": 2, 00:07:25.489 "base_bdevs_list": [ 00:07:25.489 { 00:07:25.489 "name": "BaseBdev1", 00:07:25.489 "uuid": "0e1c99fa-32ce-4e49-9c2c-42d437c23d3d", 00:07:25.489 "is_configured": true, 00:07:25.489 "data_offset": 2048, 00:07:25.489 "data_size": 63488 00:07:25.489 }, 00:07:25.489 { 00:07:25.489 "name": "BaseBdev2", 00:07:25.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.489 
"is_configured": false, 00:07:25.489 "data_offset": 0, 00:07:25.489 "data_size": 0 00:07:25.489 } 00:07:25.489 ] 00:07:25.489 }' 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.489 09:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.058 [2024-11-15 09:26:14.221844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.058 [2024-11-15 09:26:14.221949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.058 [2024-11-15 09:26:14.233962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.058 [2024-11-15 09:26:14.236593] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.058 [2024-11-15 09:26:14.236661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.058 09:26:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.058 09:26:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.058 "name": "Existed_Raid", 00:07:26.058 "uuid": "e1e97c05-7d1c-4166-ada5-397b7c10ce2b", 00:07:26.058 "strip_size_kb": 64, 00:07:26.058 "state": "configuring", 00:07:26.058 "raid_level": "raid0", 00:07:26.058 "superblock": true, 00:07:26.058 "num_base_bdevs": 2, 00:07:26.058 "num_base_bdevs_discovered": 1, 00:07:26.058 "num_base_bdevs_operational": 2, 00:07:26.058 "base_bdevs_list": [ 00:07:26.058 { 00:07:26.058 "name": "BaseBdev1", 00:07:26.058 "uuid": "0e1c99fa-32ce-4e49-9c2c-42d437c23d3d", 00:07:26.058 "is_configured": true, 00:07:26.058 "data_offset": 2048, 00:07:26.058 "data_size": 63488 00:07:26.058 }, 00:07:26.058 { 00:07:26.058 "name": "BaseBdev2", 00:07:26.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.058 "is_configured": false, 00:07:26.058 "data_offset": 0, 00:07:26.058 "data_size": 0 00:07:26.058 } 00:07:26.058 ] 00:07:26.058 }' 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.058 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.316 [2024-11-15 09:26:14.744953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.316 [2024-11-15 09:26:14.745296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.316 [2024-11-15 09:26:14.745316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.316 [2024-11-15 09:26:14.745685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:26.316 [2024-11-15 09:26:14.745900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.316 [2024-11-15 09:26:14.745925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:26.316 BaseBdev2 00:07:26.316 [2024-11-15 09:26:14.746114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:26.316 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.317 09:26:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.317 [ 00:07:26.317 { 00:07:26.317 "name": "BaseBdev2", 00:07:26.317 "aliases": [ 00:07:26.317 "bba6cc01-affc-4a7c-b92a-01d338568861" 00:07:26.317 ], 00:07:26.317 "product_name": "Malloc disk", 00:07:26.317 "block_size": 512, 00:07:26.317 "num_blocks": 65536, 00:07:26.317 "uuid": "bba6cc01-affc-4a7c-b92a-01d338568861", 00:07:26.317 "assigned_rate_limits": { 00:07:26.317 "rw_ios_per_sec": 0, 00:07:26.317 "rw_mbytes_per_sec": 0, 00:07:26.317 "r_mbytes_per_sec": 0, 00:07:26.317 "w_mbytes_per_sec": 0 00:07:26.317 }, 00:07:26.317 "claimed": true, 00:07:26.317 "claim_type": "exclusive_write", 00:07:26.317 "zoned": false, 00:07:26.317 "supported_io_types": { 00:07:26.317 "read": true, 00:07:26.317 "write": true, 00:07:26.317 "unmap": true, 00:07:26.317 "flush": true, 00:07:26.317 "reset": true, 00:07:26.317 "nvme_admin": false, 00:07:26.317 "nvme_io": false, 00:07:26.317 "nvme_io_md": false, 00:07:26.317 "write_zeroes": true, 00:07:26.317 "zcopy": true, 00:07:26.317 "get_zone_info": false, 00:07:26.317 "zone_management": false, 00:07:26.317 "zone_append": false, 00:07:26.317 "compare": false, 00:07:26.317 "compare_and_write": false, 00:07:26.317 "abort": true, 00:07:26.317 "seek_hole": false, 00:07:26.317 "seek_data": false, 00:07:26.317 "copy": true, 00:07:26.317 "nvme_iov_md": false 00:07:26.317 }, 00:07:26.575 "memory_domains": [ 00:07:26.575 { 00:07:26.575 "dma_device_id": "system", 00:07:26.575 "dma_device_type": 1 00:07:26.575 }, 00:07:26.575 { 00:07:26.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.575 "dma_device_type": 2 00:07:26.575 } 00:07:26.575 ], 00:07:26.575 "driver_specific": {} 00:07:26.575 } 00:07:26.575 ] 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:26.575 09:26:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.575 09:26:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.575 "name": "Existed_Raid", 00:07:26.575 "uuid": "e1e97c05-7d1c-4166-ada5-397b7c10ce2b", 00:07:26.575 "strip_size_kb": 64, 00:07:26.575 "state": "online", 00:07:26.575 "raid_level": "raid0", 00:07:26.575 "superblock": true, 00:07:26.575 "num_base_bdevs": 2, 00:07:26.575 "num_base_bdevs_discovered": 2, 00:07:26.575 "num_base_bdevs_operational": 2, 00:07:26.575 "base_bdevs_list": [ 00:07:26.575 { 00:07:26.575 "name": "BaseBdev1", 00:07:26.575 "uuid": "0e1c99fa-32ce-4e49-9c2c-42d437c23d3d", 00:07:26.575 "is_configured": true, 00:07:26.575 "data_offset": 2048, 00:07:26.575 "data_size": 63488 00:07:26.575 }, 00:07:26.575 { 00:07:26.575 "name": "BaseBdev2", 00:07:26.575 "uuid": "bba6cc01-affc-4a7c-b92a-01d338568861", 00:07:26.575 "is_configured": true, 00:07:26.575 "data_offset": 2048, 00:07:26.575 "data_size": 63488 00:07:26.575 } 00:07:26.575 ] 00:07:26.575 }' 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.575 09:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.834 [2024-11-15 09:26:15.232824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.834 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.834 "name": "Existed_Raid", 00:07:26.834 "aliases": [ 00:07:26.834 "e1e97c05-7d1c-4166-ada5-397b7c10ce2b" 00:07:26.834 ], 00:07:26.834 "product_name": "Raid Volume", 00:07:26.834 "block_size": 512, 00:07:26.834 "num_blocks": 126976, 00:07:26.834 "uuid": "e1e97c05-7d1c-4166-ada5-397b7c10ce2b", 00:07:26.834 "assigned_rate_limits": { 00:07:26.834 "rw_ios_per_sec": 0, 00:07:26.834 "rw_mbytes_per_sec": 0, 00:07:26.834 "r_mbytes_per_sec": 0, 00:07:26.834 "w_mbytes_per_sec": 0 00:07:26.834 }, 00:07:26.834 "claimed": false, 00:07:26.835 "zoned": false, 00:07:26.835 "supported_io_types": { 00:07:26.835 "read": true, 00:07:26.835 "write": true, 00:07:26.835 "unmap": true, 00:07:26.835 "flush": true, 00:07:26.835 "reset": true, 00:07:26.835 "nvme_admin": false, 00:07:26.835 "nvme_io": false, 00:07:26.835 "nvme_io_md": false, 00:07:26.835 "write_zeroes": true, 00:07:26.835 "zcopy": false, 00:07:26.835 "get_zone_info": false, 00:07:26.835 "zone_management": false, 00:07:26.835 "zone_append": false, 00:07:26.835 "compare": false, 00:07:26.835 "compare_and_write": false, 00:07:26.835 "abort": false, 00:07:26.835 "seek_hole": false, 00:07:26.835 "seek_data": false, 00:07:26.835 "copy": false, 00:07:26.835 "nvme_iov_md": false 00:07:26.835 }, 00:07:26.835 "memory_domains": [ 00:07:26.835 { 00:07:26.835 
"dma_device_id": "system", 00:07:26.835 "dma_device_type": 1 00:07:26.835 }, 00:07:26.835 { 00:07:26.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.835 "dma_device_type": 2 00:07:26.835 }, 00:07:26.835 { 00:07:26.835 "dma_device_id": "system", 00:07:26.835 "dma_device_type": 1 00:07:26.835 }, 00:07:26.835 { 00:07:26.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.835 "dma_device_type": 2 00:07:26.835 } 00:07:26.835 ], 00:07:26.835 "driver_specific": { 00:07:26.835 "raid": { 00:07:26.835 "uuid": "e1e97c05-7d1c-4166-ada5-397b7c10ce2b", 00:07:26.835 "strip_size_kb": 64, 00:07:26.835 "state": "online", 00:07:26.835 "raid_level": "raid0", 00:07:26.835 "superblock": true, 00:07:26.835 "num_base_bdevs": 2, 00:07:26.835 "num_base_bdevs_discovered": 2, 00:07:26.835 "num_base_bdevs_operational": 2, 00:07:26.835 "base_bdevs_list": [ 00:07:26.835 { 00:07:26.835 "name": "BaseBdev1", 00:07:26.835 "uuid": "0e1c99fa-32ce-4e49-9c2c-42d437c23d3d", 00:07:26.835 "is_configured": true, 00:07:26.835 "data_offset": 2048, 00:07:26.835 "data_size": 63488 00:07:26.835 }, 00:07:26.835 { 00:07:26.835 "name": "BaseBdev2", 00:07:26.835 "uuid": "bba6cc01-affc-4a7c-b92a-01d338568861", 00:07:26.835 "is_configured": true, 00:07:26.835 "data_offset": 2048, 00:07:26.835 "data_size": 63488 00:07:26.835 } 00:07:26.835 ] 00:07:26.835 } 00:07:26.835 } 00:07:26.835 }' 00:07:26.835 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:27.094 BaseBdev2' 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:27.094 09:26:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.094 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.094 [2024-11-15 09:26:15.468583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:27.094 [2024-11-15 09:26:15.468652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.094 [2024-11-15 09:26:15.468741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.355 "name": "Existed_Raid", 00:07:27.355 "uuid": "e1e97c05-7d1c-4166-ada5-397b7c10ce2b", 00:07:27.355 "strip_size_kb": 64, 00:07:27.355 "state": "offline", 00:07:27.355 "raid_level": "raid0", 00:07:27.355 "superblock": true, 00:07:27.355 "num_base_bdevs": 2, 00:07:27.355 "num_base_bdevs_discovered": 1, 00:07:27.355 "num_base_bdevs_operational": 1, 00:07:27.355 "base_bdevs_list": [ 00:07:27.355 { 00:07:27.355 "name": null, 00:07:27.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.355 "is_configured": false, 00:07:27.355 "data_offset": 0, 00:07:27.355 "data_size": 63488 00:07:27.355 }, 00:07:27.355 { 00:07:27.355 "name": "BaseBdev2", 00:07:27.355 "uuid": "bba6cc01-affc-4a7c-b92a-01d338568861", 00:07:27.355 "is_configured": true, 00:07:27.355 "data_offset": 2048, 00:07:27.355 "data_size": 63488 00:07:27.355 } 00:07:27.355 ] 
00:07:27.355 }' 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.355 09:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.623 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:27.623 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.623 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.623 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:27.623 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.623 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.623 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.882 [2024-11-15 09:26:16.095698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.882 [2024-11-15 09:26:16.095776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.882 09:26:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61205 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61205 ']' 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61205 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61205 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:07:27.882 killing process with pid 61205 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61205' 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61205 00:07:27.882 09:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61205 00:07:27.882 [2024-11-15 09:26:16.297449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.882 [2024-11-15 09:26:16.317679] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.263 09:26:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:29.263 00:07:29.263 real 0m5.501s 00:07:29.263 user 0m7.727s 00:07:29.263 sys 0m1.029s 00:07:29.263 09:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.263 09:26:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.263 ************************************ 00:07:29.264 END TEST raid_state_function_test_sb 00:07:29.264 ************************************ 00:07:29.264 09:26:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:29.264 09:26:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:29.264 09:26:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.264 09:26:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.264 ************************************ 00:07:29.264 START TEST raid_superblock_test 00:07:29.264 ************************************ 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61463 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61463 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61463 ']' 00:07:29.264 09:26:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:29.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:29.264 09:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.523 [2024-11-15 09:26:17.782682] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:29.523 [2024-11-15 09:26:17.782862] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61463 ] 00:07:29.523 [2024-11-15 09:26:17.963338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.783 [2024-11-15 09:26:18.104181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.043 [2024-11-15 09:26:18.363587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.043 [2024-11-15 09:26:18.363684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.304 
09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.304 malloc1 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.304 [2024-11-15 09:26:18.718424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:30.304 [2024-11-15 09:26:18.718508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.304 [2024-11-15 09:26:18.718538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:30.304 [2024-11-15 09:26:18.718550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:30.304 [2024-11-15 09:26:18.721616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.304 [2024-11-15 09:26:18.721670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:30.304 pt1 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.304 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.564 malloc2 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.564 [2024-11-15 09:26:18.788074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:30.564 [2024-11-15 09:26:18.788147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.564 [2024-11-15 09:26:18.788177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:30.564 [2024-11-15 09:26:18.788187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.564 [2024-11-15 09:26:18.790948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.564 [2024-11-15 09:26:18.790988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:30.564 pt2 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.564 [2024-11-15 09:26:18.800118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:30.564 [2024-11-15 09:26:18.802512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:30.564 [2024-11-15 09:26:18.802716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:30.564 [2024-11-15 09:26:18.802731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:30.564 [2024-11-15 09:26:18.803061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:30.564 [2024-11-15 09:26:18.803266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:30.564 [2024-11-15 09:26:18.803291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:30.564 [2024-11-15 09:26:18.803475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.564 09:26:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.564 "name": "raid_bdev1", 00:07:30.564 "uuid": "5417a2d1-a612-49ef-b8a4-2623ae032949", 00:07:30.564 "strip_size_kb": 64, 00:07:30.564 "state": "online", 00:07:30.564 "raid_level": "raid0", 00:07:30.564 "superblock": true, 00:07:30.564 "num_base_bdevs": 2, 00:07:30.564 "num_base_bdevs_discovered": 2, 00:07:30.564 "num_base_bdevs_operational": 2, 00:07:30.564 "base_bdevs_list": [ 00:07:30.564 { 00:07:30.564 "name": "pt1", 00:07:30.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:30.564 "is_configured": true, 00:07:30.564 "data_offset": 2048, 00:07:30.564 "data_size": 63488 00:07:30.564 }, 00:07:30.564 { 00:07:30.564 "name": "pt2", 00:07:30.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:30.564 "is_configured": true, 00:07:30.564 "data_offset": 2048, 00:07:30.564 "data_size": 63488 00:07:30.564 } 00:07:30.564 ] 00:07:30.564 }' 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.564 09:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.823 [2024-11-15 09:26:19.255698] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:30.823 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.082 "name": "raid_bdev1", 00:07:31.082 "aliases": [ 00:07:31.082 "5417a2d1-a612-49ef-b8a4-2623ae032949" 00:07:31.082 ], 00:07:31.082 "product_name": "Raid Volume", 00:07:31.082 "block_size": 512, 00:07:31.082 "num_blocks": 126976, 00:07:31.082 "uuid": "5417a2d1-a612-49ef-b8a4-2623ae032949", 00:07:31.082 "assigned_rate_limits": { 00:07:31.082 "rw_ios_per_sec": 0, 00:07:31.082 "rw_mbytes_per_sec": 0, 00:07:31.082 "r_mbytes_per_sec": 0, 00:07:31.082 "w_mbytes_per_sec": 0 00:07:31.082 }, 00:07:31.082 "claimed": false, 00:07:31.082 "zoned": false, 00:07:31.082 "supported_io_types": { 00:07:31.082 "read": true, 00:07:31.082 "write": true, 00:07:31.082 "unmap": true, 00:07:31.082 "flush": true, 00:07:31.082 "reset": true, 00:07:31.082 "nvme_admin": false, 00:07:31.082 "nvme_io": false, 00:07:31.082 "nvme_io_md": false, 00:07:31.082 "write_zeroes": true, 00:07:31.082 "zcopy": false, 00:07:31.082 "get_zone_info": false, 00:07:31.082 "zone_management": false, 00:07:31.082 "zone_append": false, 00:07:31.082 "compare": false, 00:07:31.082 "compare_and_write": false, 00:07:31.082 "abort": false, 00:07:31.082 
"seek_hole": false, 00:07:31.082 "seek_data": false, 00:07:31.082 "copy": false, 00:07:31.082 "nvme_iov_md": false 00:07:31.082 }, 00:07:31.082 "memory_domains": [ 00:07:31.082 { 00:07:31.082 "dma_device_id": "system", 00:07:31.082 "dma_device_type": 1 00:07:31.082 }, 00:07:31.082 { 00:07:31.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.082 "dma_device_type": 2 00:07:31.082 }, 00:07:31.082 { 00:07:31.082 "dma_device_id": "system", 00:07:31.082 "dma_device_type": 1 00:07:31.082 }, 00:07:31.082 { 00:07:31.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.082 "dma_device_type": 2 00:07:31.082 } 00:07:31.082 ], 00:07:31.082 "driver_specific": { 00:07:31.082 "raid": { 00:07:31.082 "uuid": "5417a2d1-a612-49ef-b8a4-2623ae032949", 00:07:31.082 "strip_size_kb": 64, 00:07:31.082 "state": "online", 00:07:31.082 "raid_level": "raid0", 00:07:31.082 "superblock": true, 00:07:31.082 "num_base_bdevs": 2, 00:07:31.082 "num_base_bdevs_discovered": 2, 00:07:31.082 "num_base_bdevs_operational": 2, 00:07:31.082 "base_bdevs_list": [ 00:07:31.082 { 00:07:31.082 "name": "pt1", 00:07:31.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.082 "is_configured": true, 00:07:31.082 "data_offset": 2048, 00:07:31.082 "data_size": 63488 00:07:31.082 }, 00:07:31.082 { 00:07:31.082 "name": "pt2", 00:07:31.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.082 "is_configured": true, 00:07:31.082 "data_offset": 2048, 00:07:31.082 "data_size": 63488 00:07:31.082 } 00:07:31.082 ] 00:07:31.082 } 00:07:31.082 } 00:07:31.082 }' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.082 pt2' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:31.082 [2024-11-15 09:26:19.503286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5417a2d1-a612-49ef-b8a4-2623ae032949 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5417a2d1-a612-49ef-b8a4-2623ae032949 ']' 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.082 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.342 [2024-11-15 09:26:19.550799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.342 [2024-11-15 09:26:19.550841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.342 [2024-11-15 09:26:19.551005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.342 [2024-11-15 09:26:19.551070] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.342 [2024-11-15 09:26:19.551088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.342 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:31.343 09:26:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.343 [2024-11-15 09:26:19.686649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:31.343 [2024-11-15 09:26:19.689241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:31.343 [2024-11-15 09:26:19.689346] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:31.343 [2024-11-15 09:26:19.689416] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:31.343 [2024-11-15 09:26:19.689435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.343 [2024-11-15 09:26:19.689451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:31.343 request: 00:07:31.343 { 00:07:31.343 "name": "raid_bdev1", 00:07:31.343 "raid_level": "raid0", 00:07:31.343 "base_bdevs": [ 00:07:31.343 "malloc1", 00:07:31.343 "malloc2" 00:07:31.343 ], 00:07:31.343 "strip_size_kb": 64, 00:07:31.343 "superblock": false, 00:07:31.343 "method": "bdev_raid_create", 00:07:31.343 "req_id": 1 00:07:31.343 } 00:07:31.343 Got JSON-RPC error response 00:07:31.343 response: 00:07:31.343 { 00:07:31.343 "code": -17, 00:07:31.343 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:31.343 } 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.343 09:26:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.343 [2024-11-15 09:26:19.754488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.343 [2024-11-15 09:26:19.754594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.343 [2024-11-15 09:26:19.754621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:31.343 [2024-11-15 09:26:19.754635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.343 [2024-11-15 09:26:19.757566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.343 [2024-11-15 09:26:19.757617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.343 [2024-11-15 09:26:19.757738] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:31.343 [2024-11-15 09:26:19.757824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.343 pt1 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:31.343 09:26:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.343 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.602 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.602 "name": "raid_bdev1", 00:07:31.602 "uuid": "5417a2d1-a612-49ef-b8a4-2623ae032949", 00:07:31.602 "strip_size_kb": 64, 00:07:31.602 "state": "configuring", 00:07:31.602 "raid_level": "raid0", 00:07:31.602 "superblock": true, 00:07:31.602 "num_base_bdevs": 2, 00:07:31.602 "num_base_bdevs_discovered": 1, 00:07:31.602 "num_base_bdevs_operational": 2, 00:07:31.602 "base_bdevs_list": [ 
00:07:31.602 { 00:07:31.602 "name": "pt1", 00:07:31.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.602 "is_configured": true, 00:07:31.602 "data_offset": 2048, 00:07:31.602 "data_size": 63488 00:07:31.602 }, 00:07:31.602 { 00:07:31.602 "name": null, 00:07:31.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.602 "is_configured": false, 00:07:31.602 "data_offset": 2048, 00:07:31.602 "data_size": 63488 00:07:31.602 } 00:07:31.602 ] 00:07:31.602 }' 00:07:31.602 09:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.602 09:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.862 [2024-11-15 09:26:20.217761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.862 [2024-11-15 09:26:20.217873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.862 [2024-11-15 09:26:20.217907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:31.862 [2024-11-15 09:26:20.217924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.862 [2024-11-15 09:26:20.218521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.862 [2024-11-15 09:26:20.218555] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.862 [2024-11-15 09:26:20.218663] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:31.862 [2024-11-15 09:26:20.218704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.862 [2024-11-15 09:26:20.218868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.862 [2024-11-15 09:26:20.218888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.862 [2024-11-15 09:26:20.219176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:31.862 [2024-11-15 09:26:20.219361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.862 [2024-11-15 09:26:20.219379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.862 [2024-11-15 09:26:20.219561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.862 pt2 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.862 "name": "raid_bdev1", 00:07:31.862 "uuid": "5417a2d1-a612-49ef-b8a4-2623ae032949", 00:07:31.862 "strip_size_kb": 64, 00:07:31.862 "state": "online", 00:07:31.862 "raid_level": "raid0", 00:07:31.862 "superblock": true, 00:07:31.862 "num_base_bdevs": 2, 00:07:31.862 "num_base_bdevs_discovered": 2, 00:07:31.862 "num_base_bdevs_operational": 2, 00:07:31.862 "base_bdevs_list": [ 00:07:31.862 { 00:07:31.862 "name": "pt1", 00:07:31.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.862 "is_configured": true, 00:07:31.862 "data_offset": 2048, 00:07:31.862 "data_size": 63488 00:07:31.862 }, 00:07:31.862 { 00:07:31.862 "name": "pt2", 00:07:31.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.862 "is_configured": true, 00:07:31.862 "data_offset": 2048, 00:07:31.862 "data_size": 
63488 00:07:31.862 } 00:07:31.862 ] 00:07:31.862 }' 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.862 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.455 [2024-11-15 09:26:20.681326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.455 "name": "raid_bdev1", 00:07:32.455 "aliases": [ 00:07:32.455 "5417a2d1-a612-49ef-b8a4-2623ae032949" 00:07:32.455 ], 00:07:32.455 "product_name": "Raid Volume", 00:07:32.455 "block_size": 512, 00:07:32.455 "num_blocks": 126976, 00:07:32.455 "uuid": "5417a2d1-a612-49ef-b8a4-2623ae032949", 00:07:32.455 "assigned_rate_limits": { 00:07:32.455 
"rw_ios_per_sec": 0, 00:07:32.455 "rw_mbytes_per_sec": 0, 00:07:32.455 "r_mbytes_per_sec": 0, 00:07:32.455 "w_mbytes_per_sec": 0 00:07:32.455 }, 00:07:32.455 "claimed": false, 00:07:32.455 "zoned": false, 00:07:32.455 "supported_io_types": { 00:07:32.455 "read": true, 00:07:32.455 "write": true, 00:07:32.455 "unmap": true, 00:07:32.455 "flush": true, 00:07:32.455 "reset": true, 00:07:32.455 "nvme_admin": false, 00:07:32.455 "nvme_io": false, 00:07:32.455 "nvme_io_md": false, 00:07:32.455 "write_zeroes": true, 00:07:32.455 "zcopy": false, 00:07:32.455 "get_zone_info": false, 00:07:32.455 "zone_management": false, 00:07:32.455 "zone_append": false, 00:07:32.455 "compare": false, 00:07:32.455 "compare_and_write": false, 00:07:32.455 "abort": false, 00:07:32.455 "seek_hole": false, 00:07:32.455 "seek_data": false, 00:07:32.455 "copy": false, 00:07:32.455 "nvme_iov_md": false 00:07:32.455 }, 00:07:32.455 "memory_domains": [ 00:07:32.455 { 00:07:32.455 "dma_device_id": "system", 00:07:32.455 "dma_device_type": 1 00:07:32.455 }, 00:07:32.455 { 00:07:32.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.455 "dma_device_type": 2 00:07:32.455 }, 00:07:32.455 { 00:07:32.455 "dma_device_id": "system", 00:07:32.455 "dma_device_type": 1 00:07:32.455 }, 00:07:32.455 { 00:07:32.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.455 "dma_device_type": 2 00:07:32.455 } 00:07:32.455 ], 00:07:32.455 "driver_specific": { 00:07:32.455 "raid": { 00:07:32.455 "uuid": "5417a2d1-a612-49ef-b8a4-2623ae032949", 00:07:32.455 "strip_size_kb": 64, 00:07:32.455 "state": "online", 00:07:32.455 "raid_level": "raid0", 00:07:32.455 "superblock": true, 00:07:32.455 "num_base_bdevs": 2, 00:07:32.455 "num_base_bdevs_discovered": 2, 00:07:32.455 "num_base_bdevs_operational": 2, 00:07:32.455 "base_bdevs_list": [ 00:07:32.455 { 00:07:32.455 "name": "pt1", 00:07:32.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.455 "is_configured": true, 00:07:32.455 "data_offset": 2048, 00:07:32.455 
"data_size": 63488 00:07:32.455 }, 00:07:32.455 { 00:07:32.455 "name": "pt2", 00:07:32.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.455 "is_configured": true, 00:07:32.455 "data_offset": 2048, 00:07:32.455 "data_size": 63488 00:07:32.455 } 00:07:32.455 ] 00:07:32.455 } 00:07:32.455 } 00:07:32.455 }' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.455 pt2' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.455 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.455 [2024-11-15 09:26:20.900951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5417a2d1-a612-49ef-b8a4-2623ae032949 '!=' 5417a2d1-a612-49ef-b8a4-2623ae032949 ']' 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61463 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61463 
']' 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61463 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61463 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:32.721 killing process with pid 61463 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61463' 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61463 00:07:32.721 09:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61463 00:07:32.721 [2024-11-15 09:26:20.972559] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.721 [2024-11-15 09:26:20.972702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.721 [2024-11-15 09:26:20.972783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.721 [2024-11-15 09:26:20.972799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:32.980 [2024-11-15 09:26:21.238652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.359 09:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.359 00:07:34.359 real 0m4.942s 00:07:34.359 user 0m6.720s 00:07:34.359 sys 0m0.891s 00:07:34.359 09:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.359 09:26:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.359 ************************************ 00:07:34.359 END TEST raid_superblock_test 00:07:34.359 ************************************ 00:07:34.359 09:26:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:34.359 09:26:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:34.359 09:26:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.359 09:26:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.359 ************************************ 00:07:34.359 START TEST raid_read_error_test 00:07:34.359 ************************************ 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.359 
09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QN26jEF4eL 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61680 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61680 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61680 ']' 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:07:34.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:34.359 09:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.359 [2024-11-15 09:26:22.816702] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:34.359 [2024-11-15 09:26:22.816885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61680 ] 00:07:34.619 [2024-11-15 09:26:22.988322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.882 [2024-11-15 09:26:23.145893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.162 [2024-11-15 09:26:23.410023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.162 [2024-11-15 09:26:23.410117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.422 BaseBdev1_malloc 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.422 true 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.422 [2024-11-15 09:26:23.767644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.422 [2024-11-15 09:26:23.767720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.422 [2024-11-15 09:26:23.767744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.422 [2024-11-15 09:26:23.767757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.422 [2024-11-15 09:26:23.770565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.422 [2024-11-15 09:26:23.770616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.422 BaseBdev1 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.422 09:26:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.422 BaseBdev2_malloc 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.422 true 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.422 [2024-11-15 09:26:23.843391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.422 [2024-11-15 09:26:23.843458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.422 [2024-11-15 09:26:23.843476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.422 [2024-11-15 09:26:23.843487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.422 [2024-11-15 09:26:23.846028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.422 [2024-11-15 09:26:23.846069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:35.422 BaseBdev2 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.422 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.422 [2024-11-15 09:26:23.855435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.422 [2024-11-15 09:26:23.857676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.422 [2024-11-15 09:26:23.857887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.422 [2024-11-15 09:26:23.857906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.422 [2024-11-15 09:26:23.858158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:35.422 [2024-11-15 09:26:23.858369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.422 [2024-11-15 09:26:23.858390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:35.422 [2024-11-15 09:26:23.858558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.423 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.683 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.683 "name": "raid_bdev1", 00:07:35.683 "uuid": "71e4b88b-791a-4686-a538-06b7fbf45559", 00:07:35.683 "strip_size_kb": 64, 00:07:35.683 "state": "online", 00:07:35.683 "raid_level": "raid0", 00:07:35.683 "superblock": true, 00:07:35.683 "num_base_bdevs": 2, 00:07:35.683 "num_base_bdevs_discovered": 2, 00:07:35.683 "num_base_bdevs_operational": 2, 00:07:35.683 "base_bdevs_list": [ 00:07:35.683 { 00:07:35.683 "name": "BaseBdev1", 00:07:35.683 "uuid": "42775a14-6a9e-5e4c-90c9-4088811900ff", 00:07:35.683 "is_configured": true, 00:07:35.683 "data_offset": 2048, 00:07:35.683 "data_size": 63488 
00:07:35.683 }, 00:07:35.683 { 00:07:35.683 "name": "BaseBdev2", 00:07:35.683 "uuid": "c2827d3a-e840-52e8-8445-9bc833888c04", 00:07:35.683 "is_configured": true, 00:07:35.683 "data_offset": 2048, 00:07:35.683 "data_size": 63488 00:07:35.683 } 00:07:35.683 ] 00:07:35.683 }' 00:07:35.683 09:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.683 09:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.941 09:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:35.941 09:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.200 [2024-11-15 09:26:24.436331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.137 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.137 "name": "raid_bdev1", 00:07:37.138 "uuid": "71e4b88b-791a-4686-a538-06b7fbf45559", 00:07:37.138 "strip_size_kb": 64, 00:07:37.138 "state": "online", 00:07:37.138 "raid_level": "raid0", 00:07:37.138 "superblock": true, 00:07:37.138 "num_base_bdevs": 2, 00:07:37.138 "num_base_bdevs_discovered": 2, 00:07:37.138 "num_base_bdevs_operational": 2, 00:07:37.138 "base_bdevs_list": [ 00:07:37.138 { 00:07:37.138 "name": "BaseBdev1", 00:07:37.138 "uuid": "42775a14-6a9e-5e4c-90c9-4088811900ff", 00:07:37.138 "is_configured": true, 00:07:37.138 "data_offset": 2048, 00:07:37.138 "data_size": 63488 
00:07:37.138 }, 00:07:37.138 { 00:07:37.138 "name": "BaseBdev2", 00:07:37.138 "uuid": "c2827d3a-e840-52e8-8445-9bc833888c04", 00:07:37.138 "is_configured": true, 00:07:37.138 "data_offset": 2048, 00:07:37.138 "data_size": 63488 00:07:37.138 } 00:07:37.138 ] 00:07:37.138 }' 00:07:37.138 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.138 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.409 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.409 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.409 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.409 [2024-11-15 09:26:25.856080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.409 [2024-11-15 09:26:25.856139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.409 [2024-11-15 09:26:25.859192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.409 [2024-11-15 09:26:25.859256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.409 [2024-11-15 09:26:25.859299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.409 [2024-11-15 09:26:25.859313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.409 { 00:07:37.409 "results": [ 00:07:37.409 { 00:07:37.409 "job": "raid_bdev1", 00:07:37.409 "core_mask": "0x1", 00:07:37.409 "workload": "randrw", 00:07:37.409 "percentage": 50, 00:07:37.409 "status": "finished", 00:07:37.409 "queue_depth": 1, 00:07:37.409 "io_size": 131072, 00:07:37.409 "runtime": 1.419897, 00:07:37.409 "iops": 11777.61485516203, 00:07:37.409 "mibps": 1472.2018568952537, 00:07:37.409 
"io_failed": 1, 00:07:37.409 "io_timeout": 0, 00:07:37.409 "avg_latency_us": 119.49809128214663, 00:07:37.409 "min_latency_us": 28.39475982532751, 00:07:37.409 "max_latency_us": 1817.2646288209608 00:07:37.409 } 00:07:37.409 ], 00:07:37.409 "core_count": 1 00:07:37.409 } 00:07:37.409 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.409 09:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61680 00:07:37.409 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61680 ']' 00:07:37.409 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61680 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61680 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.693 killing process with pid 61680 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61680' 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61680 00:07:37.693 [2024-11-15 09:26:25.891860] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.693 09:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61680 00:07:37.693 [2024-11-15 09:26:26.081054] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QN26jEF4eL 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:39.080 00:07:39.080 real 0m4.771s 00:07:39.080 user 0m5.606s 00:07:39.080 sys 0m0.720s 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:39.080 09:26:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.080 ************************************ 00:07:39.080 END TEST raid_read_error_test 00:07:39.080 ************************************ 00:07:39.080 09:26:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:39.080 09:26:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:39.080 09:26:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:39.080 09:26:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.080 ************************************ 00:07:39.080 START TEST raid_write_error_test 00:07:39.080 ************************************ 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:39.080 09:26:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.080 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:39.081 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:39.340 09:26:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uHVXlGRcl7 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61820 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61820 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61820 ']' 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:39.340 09:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.340 [2024-11-15 09:26:27.654259] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:07:39.340 [2024-11-15 09:26:27.654414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61820 ] 00:07:39.600 [2024-11-15 09:26:27.822240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.600 [2024-11-15 09:26:27.966160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.859 [2024-11-15 09:26:28.227093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.859 [2024-11-15 09:26:28.227189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.118 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.118 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:07:40.118 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:40.118 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:40.118 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.118 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 BaseBdev1_malloc 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 true 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 [2024-11-15 09:26:28.604095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:40.379 [2024-11-15 09:26:28.604241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.379 [2024-11-15 09:26:28.604287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:40.379 [2024-11-15 09:26:28.604302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.379 [2024-11-15 09:26:28.607089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.379 [2024-11-15 09:26:28.607129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:40.379 BaseBdev1 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 BaseBdev2_malloc 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:40.379 09:26:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 true 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 [2024-11-15 09:26:28.680020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:40.379 [2024-11-15 09:26:28.680140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.379 [2024-11-15 09:26:28.680163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:40.379 [2024-11-15 09:26:28.680175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.379 [2024-11-15 09:26:28.682641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.379 [2024-11-15 09:26:28.682681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:40.379 BaseBdev2 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 [2024-11-15 09:26:28.692091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:40.379 [2024-11-15 09:26:28.694322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:40.379 [2024-11-15 09:26:28.694556] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:40.379 [2024-11-15 09:26:28.694576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:40.379 [2024-11-15 09:26:28.694890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:40.379 [2024-11-15 09:26:28.695108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:40.379 [2024-11-15 09:26:28.695218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:40.379 [2024-11-15 09:26:28.695428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.379 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.379 "name": "raid_bdev1", 00:07:40.379 "uuid": "01e6ac5d-76c5-4162-be44-e11a82523ca6", 00:07:40.379 "strip_size_kb": 64, 00:07:40.379 "state": "online", 00:07:40.379 "raid_level": "raid0", 00:07:40.379 "superblock": true, 00:07:40.379 "num_base_bdevs": 2, 00:07:40.379 "num_base_bdevs_discovered": 2, 00:07:40.379 "num_base_bdevs_operational": 2, 00:07:40.379 "base_bdevs_list": [ 00:07:40.380 { 00:07:40.380 "name": "BaseBdev1", 00:07:40.380 "uuid": "9bb8db97-2978-5921-82b7-0c8adbb669a7", 00:07:40.380 "is_configured": true, 00:07:40.380 "data_offset": 2048, 00:07:40.380 "data_size": 63488 00:07:40.380 }, 00:07:40.380 { 00:07:40.380 "name": "BaseBdev2", 00:07:40.380 "uuid": "b08574a7-e1de-5b1d-8287-6e48123defec", 00:07:40.380 "is_configured": true, 00:07:40.380 "data_offset": 2048, 00:07:40.380 "data_size": 63488 00:07:40.380 } 00:07:40.380 ] 00:07:40.380 }' 00:07:40.380 09:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.380 09:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.950 09:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:40.950 09:26:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.950 [2024-11-15 09:26:29.296913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.888 09:26:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.888 "name": "raid_bdev1", 00:07:41.888 "uuid": "01e6ac5d-76c5-4162-be44-e11a82523ca6", 00:07:41.888 "strip_size_kb": 64, 00:07:41.888 "state": "online", 00:07:41.888 "raid_level": "raid0", 00:07:41.888 "superblock": true, 00:07:41.888 "num_base_bdevs": 2, 00:07:41.888 "num_base_bdevs_discovered": 2, 00:07:41.888 "num_base_bdevs_operational": 2, 00:07:41.888 "base_bdevs_list": [ 00:07:41.888 { 00:07:41.888 "name": "BaseBdev1", 00:07:41.888 "uuid": "9bb8db97-2978-5921-82b7-0c8adbb669a7", 00:07:41.888 "is_configured": true, 00:07:41.888 "data_offset": 2048, 00:07:41.888 "data_size": 63488 00:07:41.888 }, 00:07:41.888 { 00:07:41.888 "name": "BaseBdev2", 00:07:41.888 "uuid": "b08574a7-e1de-5b1d-8287-6e48123defec", 00:07:41.888 "is_configured": true, 00:07:41.888 "data_offset": 2048, 00:07:41.888 "data_size": 63488 00:07:41.888 } 00:07:41.888 ] 00:07:41.888 }' 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.888 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.469 [2024-11-15 09:26:30.673930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:42.469 [2024-11-15 09:26:30.673976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.469 [2024-11-15 09:26:30.676722] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.469 [2024-11-15 09:26:30.676767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.469 [2024-11-15 09:26:30.676804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.469 [2024-11-15 09:26:30.676818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:42.469 { 00:07:42.469 "results": [ 00:07:42.469 { 00:07:42.469 "job": "raid_bdev1", 00:07:42.469 "core_mask": "0x1", 00:07:42.469 "workload": "randrw", 00:07:42.469 "percentage": 50, 00:07:42.469 "status": "finished", 00:07:42.469 "queue_depth": 1, 00:07:42.469 "io_size": 131072, 00:07:42.469 "runtime": 1.377173, 00:07:42.469 "iops": 13446.38618387087, 00:07:42.469 "mibps": 1680.7982729838589, 00:07:42.469 "io_failed": 1, 00:07:42.469 "io_timeout": 0, 00:07:42.469 "avg_latency_us": 104.6384005474373, 00:07:42.469 "min_latency_us": 26.606113537117903, 00:07:42.469 "max_latency_us": 1430.9170305676855 00:07:42.469 } 00:07:42.469 ], 00:07:42.469 "core_count": 1 00:07:42.469 } 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61820 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61820 ']' 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61820 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61820 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61820' 00:07:42.469 killing process with pid 61820 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61820 00:07:42.469 [2024-11-15 09:26:30.720601] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.469 09:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61820 00:07:42.469 [2024-11-15 09:26:30.882351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uHVXlGRcl7 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:43.886 ************************************ 00:07:43.886 END TEST raid_write_error_test 00:07:43.886 ************************************ 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:43.886 00:07:43.886 real 0m4.729s 00:07:43.886 user 0m5.593s 00:07:43.886 sys 0m0.686s 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.886 09:26:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.886 09:26:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:43.886 09:26:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:43.886 09:26:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:43.886 09:26:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.886 09:26:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.886 ************************************ 00:07:43.886 START TEST raid_state_function_test 00:07:43.886 ************************************ 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:43.886 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61969 00:07:44.145 09:26:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61969' 00:07:44.145 Process raid pid: 61969 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61969 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61969 ']' 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:44.145 09:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.145 [2024-11-15 09:26:32.447718] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:07:44.145 [2024-11-15 09:26:32.447889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.404 [2024-11-15 09:26:32.627210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.404 [2024-11-15 09:26:32.773099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.663 [2024-11-15 09:26:33.042741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.663 [2024-11-15 09:26:33.042806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.922 [2024-11-15 09:26:33.344771] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.922 [2024-11-15 09:26:33.344866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.922 [2024-11-15 09:26:33.344881] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.922 [2024-11-15 09:26:33.344893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.922 09:26:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.922 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.182 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.182 "name": "Existed_Raid", 00:07:45.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.182 "strip_size_kb": 64, 00:07:45.182 "state": "configuring", 00:07:45.182 
"raid_level": "concat", 00:07:45.182 "superblock": false, 00:07:45.182 "num_base_bdevs": 2, 00:07:45.182 "num_base_bdevs_discovered": 0, 00:07:45.182 "num_base_bdevs_operational": 2, 00:07:45.182 "base_bdevs_list": [ 00:07:45.182 { 00:07:45.182 "name": "BaseBdev1", 00:07:45.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.182 "is_configured": false, 00:07:45.182 "data_offset": 0, 00:07:45.182 "data_size": 0 00:07:45.182 }, 00:07:45.182 { 00:07:45.182 "name": "BaseBdev2", 00:07:45.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.182 "is_configured": false, 00:07:45.182 "data_offset": 0, 00:07:45.182 "data_size": 0 00:07:45.182 } 00:07:45.182 ] 00:07:45.182 }' 00:07:45.182 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.182 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.442 [2024-11-15 09:26:33.831671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.442 [2024-11-15 09:26:33.831786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:45.442 [2024-11-15 09:26:33.843602] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.442 [2024-11-15 09:26:33.843690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.442 [2024-11-15 09:26:33.843719] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.442 [2024-11-15 09:26:33.843745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.442 [2024-11-15 09:26:33.899472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.442 BaseBdev1 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:45.442 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:45.443 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:45.443 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:07:45.443 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.443 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.700 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.700 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.701 [ 00:07:45.701 { 00:07:45.701 "name": "BaseBdev1", 00:07:45.701 "aliases": [ 00:07:45.701 "0177e5ad-ce6f-4ee9-bcb5-461d7aed4bae" 00:07:45.701 ], 00:07:45.701 "product_name": "Malloc disk", 00:07:45.701 "block_size": 512, 00:07:45.701 "num_blocks": 65536, 00:07:45.701 "uuid": "0177e5ad-ce6f-4ee9-bcb5-461d7aed4bae", 00:07:45.701 "assigned_rate_limits": { 00:07:45.701 "rw_ios_per_sec": 0, 00:07:45.701 "rw_mbytes_per_sec": 0, 00:07:45.701 "r_mbytes_per_sec": 0, 00:07:45.701 "w_mbytes_per_sec": 0 00:07:45.701 }, 00:07:45.701 "claimed": true, 00:07:45.701 "claim_type": "exclusive_write", 00:07:45.701 "zoned": false, 00:07:45.701 "supported_io_types": { 00:07:45.701 "read": true, 00:07:45.701 "write": true, 00:07:45.701 "unmap": true, 00:07:45.701 "flush": true, 00:07:45.701 "reset": true, 00:07:45.701 "nvme_admin": false, 00:07:45.701 "nvme_io": false, 00:07:45.701 "nvme_io_md": false, 00:07:45.701 "write_zeroes": true, 00:07:45.701 "zcopy": true, 00:07:45.701 "get_zone_info": false, 00:07:45.701 "zone_management": false, 00:07:45.701 "zone_append": false, 00:07:45.701 "compare": false, 00:07:45.701 "compare_and_write": false, 00:07:45.701 "abort": true, 00:07:45.701 "seek_hole": false, 00:07:45.701 "seek_data": false, 00:07:45.701 "copy": true, 00:07:45.701 "nvme_iov_md": 
false 00:07:45.701 }, 00:07:45.701 "memory_domains": [ 00:07:45.701 { 00:07:45.701 "dma_device_id": "system", 00:07:45.701 "dma_device_type": 1 00:07:45.701 }, 00:07:45.701 { 00:07:45.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.701 "dma_device_type": 2 00:07:45.701 } 00:07:45.701 ], 00:07:45.701 "driver_specific": {} 00:07:45.701 } 00:07:45.701 ] 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.701 
09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.701 "name": "Existed_Raid", 00:07:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.701 "strip_size_kb": 64, 00:07:45.701 "state": "configuring", 00:07:45.701 "raid_level": "concat", 00:07:45.701 "superblock": false, 00:07:45.701 "num_base_bdevs": 2, 00:07:45.701 "num_base_bdevs_discovered": 1, 00:07:45.701 "num_base_bdevs_operational": 2, 00:07:45.701 "base_bdevs_list": [ 00:07:45.701 { 00:07:45.701 "name": "BaseBdev1", 00:07:45.701 "uuid": "0177e5ad-ce6f-4ee9-bcb5-461d7aed4bae", 00:07:45.701 "is_configured": true, 00:07:45.701 "data_offset": 0, 00:07:45.701 "data_size": 65536 00:07:45.701 }, 00:07:45.701 { 00:07:45.701 "name": "BaseBdev2", 00:07:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.701 "is_configured": false, 00:07:45.701 "data_offset": 0, 00:07:45.701 "data_size": 0 00:07:45.701 } 00:07:45.701 ] 00:07:45.701 }' 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.701 09:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.960 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.960 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.960 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.960 [2024-11-15 09:26:34.334841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.961 [2024-11-15 09:26:34.335002] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.961 [2024-11-15 09:26:34.346907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.961 [2024-11-15 09:26:34.349506] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.961 [2024-11-15 09:26:34.349606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.961 "name": "Existed_Raid", 00:07:45.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.961 "strip_size_kb": 64, 00:07:45.961 "state": "configuring", 00:07:45.961 "raid_level": "concat", 00:07:45.961 "superblock": false, 00:07:45.961 "num_base_bdevs": 2, 00:07:45.961 "num_base_bdevs_discovered": 1, 00:07:45.961 "num_base_bdevs_operational": 2, 00:07:45.961 "base_bdevs_list": [ 00:07:45.961 { 00:07:45.961 "name": "BaseBdev1", 00:07:45.961 "uuid": "0177e5ad-ce6f-4ee9-bcb5-461d7aed4bae", 00:07:45.961 "is_configured": true, 00:07:45.961 "data_offset": 0, 00:07:45.961 "data_size": 65536 00:07:45.961 }, 00:07:45.961 { 00:07:45.961 "name": "BaseBdev2", 00:07:45.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.961 "is_configured": false, 00:07:45.961 "data_offset": 0, 00:07:45.961 "data_size": 0 00:07:45.961 } 
00:07:45.961 ] 00:07:45.961 }' 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.961 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.530 [2024-11-15 09:26:34.849061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.530 [2024-11-15 09:26:34.849134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:46.530 [2024-11-15 09:26:34.849144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:46.530 [2024-11-15 09:26:34.849494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.530 [2024-11-15 09:26:34.849695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:46.530 [2024-11-15 09:26:34.849712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:46.530 [2024-11-15 09:26:34.850072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.530 BaseBdev2 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:46.530 09:26:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.530 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.531 [ 00:07:46.531 { 00:07:46.531 "name": "BaseBdev2", 00:07:46.531 "aliases": [ 00:07:46.531 "486e308c-fa5a-483e-aa3a-40acd455ee9e" 00:07:46.531 ], 00:07:46.531 "product_name": "Malloc disk", 00:07:46.531 "block_size": 512, 00:07:46.531 "num_blocks": 65536, 00:07:46.531 "uuid": "486e308c-fa5a-483e-aa3a-40acd455ee9e", 00:07:46.531 "assigned_rate_limits": { 00:07:46.531 "rw_ios_per_sec": 0, 00:07:46.531 "rw_mbytes_per_sec": 0, 00:07:46.531 "r_mbytes_per_sec": 0, 00:07:46.531 "w_mbytes_per_sec": 0 00:07:46.531 }, 00:07:46.531 "claimed": true, 00:07:46.531 "claim_type": "exclusive_write", 00:07:46.531 "zoned": false, 00:07:46.531 "supported_io_types": { 00:07:46.531 "read": true, 00:07:46.531 "write": true, 00:07:46.531 "unmap": true, 00:07:46.531 "flush": true, 00:07:46.531 "reset": true, 00:07:46.531 "nvme_admin": false, 00:07:46.531 "nvme_io": false, 00:07:46.531 "nvme_io_md": 
false, 00:07:46.531 "write_zeroes": true, 00:07:46.531 "zcopy": true, 00:07:46.531 "get_zone_info": false, 00:07:46.531 "zone_management": false, 00:07:46.531 "zone_append": false, 00:07:46.531 "compare": false, 00:07:46.531 "compare_and_write": false, 00:07:46.531 "abort": true, 00:07:46.531 "seek_hole": false, 00:07:46.531 "seek_data": false, 00:07:46.531 "copy": true, 00:07:46.531 "nvme_iov_md": false 00:07:46.531 }, 00:07:46.531 "memory_domains": [ 00:07:46.531 { 00:07:46.531 "dma_device_id": "system", 00:07:46.531 "dma_device_type": 1 00:07:46.531 }, 00:07:46.531 { 00:07:46.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.531 "dma_device_type": 2 00:07:46.531 } 00:07:46.531 ], 00:07:46.531 "driver_specific": {} 00:07:46.531 } 00:07:46.531 ] 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.531 "name": "Existed_Raid", 00:07:46.531 "uuid": "17a86731-4198-4524-b4ae-4a651818cdba", 00:07:46.531 "strip_size_kb": 64, 00:07:46.531 "state": "online", 00:07:46.531 "raid_level": "concat", 00:07:46.531 "superblock": false, 00:07:46.531 "num_base_bdevs": 2, 00:07:46.531 "num_base_bdevs_discovered": 2, 00:07:46.531 "num_base_bdevs_operational": 2, 00:07:46.531 "base_bdevs_list": [ 00:07:46.531 { 00:07:46.531 "name": "BaseBdev1", 00:07:46.531 "uuid": "0177e5ad-ce6f-4ee9-bcb5-461d7aed4bae", 00:07:46.531 "is_configured": true, 00:07:46.531 "data_offset": 0, 00:07:46.531 "data_size": 65536 00:07:46.531 }, 00:07:46.531 { 00:07:46.531 "name": "BaseBdev2", 00:07:46.531 "uuid": "486e308c-fa5a-483e-aa3a-40acd455ee9e", 00:07:46.531 "is_configured": true, 00:07:46.531 "data_offset": 0, 00:07:46.531 "data_size": 65536 00:07:46.531 } 00:07:46.531 ] 00:07:46.531 }' 00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:46.531 09:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.101 [2024-11-15 09:26:35.308691] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.101 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.101 "name": "Existed_Raid", 00:07:47.101 "aliases": [ 00:07:47.101 "17a86731-4198-4524-b4ae-4a651818cdba" 00:07:47.101 ], 00:07:47.101 "product_name": "Raid Volume", 00:07:47.101 "block_size": 512, 00:07:47.101 "num_blocks": 131072, 00:07:47.101 "uuid": "17a86731-4198-4524-b4ae-4a651818cdba", 00:07:47.101 "assigned_rate_limits": { 00:07:47.101 "rw_ios_per_sec": 0, 00:07:47.101 "rw_mbytes_per_sec": 0, 00:07:47.101 "r_mbytes_per_sec": 
0, 00:07:47.101 "w_mbytes_per_sec": 0 00:07:47.101 }, 00:07:47.101 "claimed": false, 00:07:47.101 "zoned": false, 00:07:47.101 "supported_io_types": { 00:07:47.101 "read": true, 00:07:47.101 "write": true, 00:07:47.101 "unmap": true, 00:07:47.101 "flush": true, 00:07:47.101 "reset": true, 00:07:47.101 "nvme_admin": false, 00:07:47.101 "nvme_io": false, 00:07:47.101 "nvme_io_md": false, 00:07:47.101 "write_zeroes": true, 00:07:47.101 "zcopy": false, 00:07:47.101 "get_zone_info": false, 00:07:47.101 "zone_management": false, 00:07:47.101 "zone_append": false, 00:07:47.101 "compare": false, 00:07:47.101 "compare_and_write": false, 00:07:47.101 "abort": false, 00:07:47.101 "seek_hole": false, 00:07:47.101 "seek_data": false, 00:07:47.101 "copy": false, 00:07:47.101 "nvme_iov_md": false 00:07:47.101 }, 00:07:47.101 "memory_domains": [ 00:07:47.101 { 00:07:47.101 "dma_device_id": "system", 00:07:47.101 "dma_device_type": 1 00:07:47.101 }, 00:07:47.101 { 00:07:47.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.101 "dma_device_type": 2 00:07:47.101 }, 00:07:47.101 { 00:07:47.101 "dma_device_id": "system", 00:07:47.101 "dma_device_type": 1 00:07:47.101 }, 00:07:47.101 { 00:07:47.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.101 "dma_device_type": 2 00:07:47.101 } 00:07:47.101 ], 00:07:47.101 "driver_specific": { 00:07:47.101 "raid": { 00:07:47.101 "uuid": "17a86731-4198-4524-b4ae-4a651818cdba", 00:07:47.101 "strip_size_kb": 64, 00:07:47.101 "state": "online", 00:07:47.101 "raid_level": "concat", 00:07:47.101 "superblock": false, 00:07:47.101 "num_base_bdevs": 2, 00:07:47.101 "num_base_bdevs_discovered": 2, 00:07:47.101 "num_base_bdevs_operational": 2, 00:07:47.101 "base_bdevs_list": [ 00:07:47.101 { 00:07:47.101 "name": "BaseBdev1", 00:07:47.102 "uuid": "0177e5ad-ce6f-4ee9-bcb5-461d7aed4bae", 00:07:47.102 "is_configured": true, 00:07:47.102 "data_offset": 0, 00:07:47.102 "data_size": 65536 00:07:47.102 }, 00:07:47.102 { 00:07:47.102 "name": "BaseBdev2", 
00:07:47.102 "uuid": "486e308c-fa5a-483e-aa3a-40acd455ee9e", 00:07:47.102 "is_configured": true, 00:07:47.102 "data_offset": 0, 00:07:47.102 "data_size": 65536 00:07:47.102 } 00:07:47.102 ] 00:07:47.102 } 00:07:47.102 } 00:07:47.102 }' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.102 BaseBdev2' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.102 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.102 [2024-11-15 09:26:35.560092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.102 [2024-11-15 09:26:35.560133] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.102 [2024-11-15 09:26:35.560192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.361 "name": "Existed_Raid", 00:07:47.361 "uuid": "17a86731-4198-4524-b4ae-4a651818cdba", 00:07:47.361 "strip_size_kb": 64, 00:07:47.361 
"state": "offline", 00:07:47.361 "raid_level": "concat", 00:07:47.361 "superblock": false, 00:07:47.361 "num_base_bdevs": 2, 00:07:47.361 "num_base_bdevs_discovered": 1, 00:07:47.361 "num_base_bdevs_operational": 1, 00:07:47.361 "base_bdevs_list": [ 00:07:47.361 { 00:07:47.361 "name": null, 00:07:47.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.361 "is_configured": false, 00:07:47.361 "data_offset": 0, 00:07:47.361 "data_size": 65536 00:07:47.361 }, 00:07:47.361 { 00:07:47.361 "name": "BaseBdev2", 00:07:47.361 "uuid": "486e308c-fa5a-483e-aa3a-40acd455ee9e", 00:07:47.361 "is_configured": true, 00:07:47.361 "data_offset": 0, 00:07:47.361 "data_size": 65536 00:07:47.361 } 00:07:47.361 ] 00:07:47.361 }' 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.361 09:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:47.929 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.930 [2024-11-15 09:26:36.189617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:47.930 [2024-11-15 09:26:36.189765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61969 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61969 ']' 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61969 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:47.930 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61969 00:07:48.189 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:48.189 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:48.189 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61969' 00:07:48.189 killing process with pid 61969 00:07:48.189 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61969 00:07:48.189 [2024-11-15 09:26:36.400228] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.189 09:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61969 00:07:48.189 [2024-11-15 09:26:36.421535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:49.567 00:07:49.567 real 0m5.414s 00:07:49.567 user 0m7.554s 00:07:49.567 sys 0m0.998s 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.567 ************************************ 00:07:49.567 END TEST raid_state_function_test 00:07:49.567 ************************************ 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.567 09:26:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:49.567 09:26:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:07:49.567 09:26:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.567 09:26:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.567 ************************************ 00:07:49.567 START TEST raid_state_function_test_sb 00:07:49.567 ************************************ 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62222 00:07:49.567 Process raid pid: 62222 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62222' 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62222 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62222 ']' 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.567 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.567 09:26:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.567 [2024-11-15 09:26:37.936223] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:49.567 [2024-11-15 09:26:37.936412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.830 [2024-11-15 09:26:38.118676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.830 [2024-11-15 09:26:38.267000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.090 [2024-11-15 09:26:38.529451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.090 [2024-11-15 09:26:38.529512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.658 [2024-11-15 09:26:38.846252] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:50.658 [2024-11-15 09:26:38.846325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.658 [2024-11-15 09:26:38.846337] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.658 [2024-11-15 09:26:38.846348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.658 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.659 "name": "Existed_Raid", 00:07:50.659 "uuid": "6a12c11d-c0cc-44f5-b9ea-9065b25302d1", 00:07:50.659 "strip_size_kb": 64, 00:07:50.659 "state": "configuring", 00:07:50.659 "raid_level": "concat", 00:07:50.659 "superblock": true, 00:07:50.659 "num_base_bdevs": 2, 00:07:50.659 "num_base_bdevs_discovered": 0, 00:07:50.659 "num_base_bdevs_operational": 2, 00:07:50.659 "base_bdevs_list": [ 00:07:50.659 { 00:07:50.659 "name": "BaseBdev1", 00:07:50.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.659 "is_configured": false, 00:07:50.659 "data_offset": 0, 00:07:50.659 "data_size": 0 00:07:50.659 }, 00:07:50.659 { 00:07:50.659 "name": "BaseBdev2", 00:07:50.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.659 "is_configured": false, 00:07:50.659 "data_offset": 0, 00:07:50.659 "data_size": 0 00:07:50.659 } 00:07:50.659 ] 00:07:50.659 }' 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.659 09:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.919 [2024-11-15 09:26:39.333366] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.919 
[2024-11-15 09:26:39.333485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.919 [2024-11-15 09:26:39.345382] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.919 [2024-11-15 09:26:39.345498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.919 [2024-11-15 09:26:39.345534] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.919 [2024-11-15 09:26:39.345566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.919 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.179 [2024-11-15 09:26:39.408191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.179 BaseBdev1 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.179 [ 00:07:51.179 { 00:07:51.179 "name": "BaseBdev1", 00:07:51.179 "aliases": [ 00:07:51.179 "ff497933-420e-4974-9159-25d4db4a8207" 00:07:51.179 ], 00:07:51.179 "product_name": "Malloc disk", 00:07:51.179 "block_size": 512, 00:07:51.179 "num_blocks": 65536, 00:07:51.179 "uuid": "ff497933-420e-4974-9159-25d4db4a8207", 00:07:51.179 "assigned_rate_limits": { 00:07:51.179 "rw_ios_per_sec": 0, 00:07:51.179 "rw_mbytes_per_sec": 0, 00:07:51.179 "r_mbytes_per_sec": 0, 00:07:51.179 "w_mbytes_per_sec": 0 00:07:51.179 }, 00:07:51.179 "claimed": true, 00:07:51.179 "claim_type": 
"exclusive_write", 00:07:51.179 "zoned": false, 00:07:51.179 "supported_io_types": { 00:07:51.179 "read": true, 00:07:51.179 "write": true, 00:07:51.179 "unmap": true, 00:07:51.179 "flush": true, 00:07:51.179 "reset": true, 00:07:51.179 "nvme_admin": false, 00:07:51.179 "nvme_io": false, 00:07:51.179 "nvme_io_md": false, 00:07:51.179 "write_zeroes": true, 00:07:51.179 "zcopy": true, 00:07:51.179 "get_zone_info": false, 00:07:51.179 "zone_management": false, 00:07:51.179 "zone_append": false, 00:07:51.179 "compare": false, 00:07:51.179 "compare_and_write": false, 00:07:51.179 "abort": true, 00:07:51.179 "seek_hole": false, 00:07:51.179 "seek_data": false, 00:07:51.179 "copy": true, 00:07:51.179 "nvme_iov_md": false 00:07:51.179 }, 00:07:51.179 "memory_domains": [ 00:07:51.179 { 00:07:51.179 "dma_device_id": "system", 00:07:51.179 "dma_device_type": 1 00:07:51.179 }, 00:07:51.179 { 00:07:51.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.179 "dma_device_type": 2 00:07:51.179 } 00:07:51.179 ], 00:07:51.179 "driver_specific": {} 00:07:51.179 } 00:07:51.179 ] 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.179 "name": "Existed_Raid", 00:07:51.179 "uuid": "1c589472-488b-4d38-af46-ecb31ef8b025", 00:07:51.179 "strip_size_kb": 64, 00:07:51.179 "state": "configuring", 00:07:51.179 "raid_level": "concat", 00:07:51.179 "superblock": true, 00:07:51.179 "num_base_bdevs": 2, 00:07:51.179 "num_base_bdevs_discovered": 1, 00:07:51.179 "num_base_bdevs_operational": 2, 00:07:51.179 "base_bdevs_list": [ 00:07:51.179 { 00:07:51.179 "name": "BaseBdev1", 00:07:51.179 "uuid": "ff497933-420e-4974-9159-25d4db4a8207", 00:07:51.179 "is_configured": true, 00:07:51.179 "data_offset": 2048, 00:07:51.179 "data_size": 63488 00:07:51.179 }, 00:07:51.179 { 00:07:51.179 "name": "BaseBdev2", 00:07:51.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.179 "is_configured": false, 00:07:51.179 
"data_offset": 0, 00:07:51.179 "data_size": 0 00:07:51.179 } 00:07:51.179 ] 00:07:51.179 }' 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.179 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.748 [2024-11-15 09:26:39.931381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.748 [2024-11-15 09:26:39.931564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.748 [2024-11-15 09:26:39.943512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.748 [2024-11-15 09:26:39.945963] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.748 [2024-11-15 09:26:39.946053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.748 09:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.748 09:26:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.748 "name": "Existed_Raid", 00:07:51.748 "uuid": "1f6e2ce0-c32c-4448-9c2b-d51d7236c38e", 00:07:51.748 "strip_size_kb": 64, 00:07:51.748 "state": "configuring", 00:07:51.748 "raid_level": "concat", 00:07:51.748 "superblock": true, 00:07:51.748 "num_base_bdevs": 2, 00:07:51.748 "num_base_bdevs_discovered": 1, 00:07:51.748 "num_base_bdevs_operational": 2, 00:07:51.748 "base_bdevs_list": [ 00:07:51.748 { 00:07:51.748 "name": "BaseBdev1", 00:07:51.748 "uuid": "ff497933-420e-4974-9159-25d4db4a8207", 00:07:51.748 "is_configured": true, 00:07:51.748 "data_offset": 2048, 00:07:51.748 "data_size": 63488 00:07:51.748 }, 00:07:51.748 { 00:07:51.748 "name": "BaseBdev2", 00:07:51.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.748 "is_configured": false, 00:07:51.748 "data_offset": 0, 00:07:51.748 "data_size": 0 00:07:51.748 } 00:07:51.748 ] 00:07:51.748 }' 00:07:51.748 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.748 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.006 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.006 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.006 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.006 [2024-11-15 09:26:40.458767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:52.006 [2024-11-15 09:26:40.459140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.006 [2024-11-15 09:26:40.459158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.006 BaseBdev2 00:07:52.006 [2024-11-15 09:26:40.459477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:07:52.007 [2024-11-15 09:26:40.459653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.007 [2024-11-15 09:26:40.459669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.007 [2024-11-15 09:26:40.459859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.007 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.265 [ 00:07:52.265 { 00:07:52.265 "name": "BaseBdev2", 00:07:52.265 "aliases": [ 00:07:52.265 "1f524d9d-7eed-44e0-8731-95c43b361620" 00:07:52.265 ], 00:07:52.265 "product_name": "Malloc disk", 00:07:52.265 "block_size": 512, 00:07:52.265 "num_blocks": 65536, 00:07:52.265 "uuid": "1f524d9d-7eed-44e0-8731-95c43b361620", 00:07:52.265 "assigned_rate_limits": { 00:07:52.265 "rw_ios_per_sec": 0, 00:07:52.265 "rw_mbytes_per_sec": 0, 00:07:52.265 "r_mbytes_per_sec": 0, 00:07:52.265 "w_mbytes_per_sec": 0 00:07:52.265 }, 00:07:52.265 "claimed": true, 00:07:52.265 "claim_type": "exclusive_write", 00:07:52.265 "zoned": false, 00:07:52.265 "supported_io_types": { 00:07:52.265 "read": true, 00:07:52.265 "write": true, 00:07:52.265 "unmap": true, 00:07:52.265 "flush": true, 00:07:52.265 "reset": true, 00:07:52.265 "nvme_admin": false, 00:07:52.265 "nvme_io": false, 00:07:52.265 "nvme_io_md": false, 00:07:52.265 "write_zeroes": true, 00:07:52.265 "zcopy": true, 00:07:52.265 "get_zone_info": false, 00:07:52.265 "zone_management": false, 00:07:52.265 "zone_append": false, 00:07:52.265 "compare": false, 00:07:52.265 "compare_and_write": false, 00:07:52.265 "abort": true, 00:07:52.265 "seek_hole": false, 00:07:52.265 "seek_data": false, 00:07:52.265 "copy": true, 00:07:52.265 "nvme_iov_md": false 00:07:52.265 }, 00:07:52.265 "memory_domains": [ 00:07:52.265 { 00:07:52.265 "dma_device_id": "system", 00:07:52.265 "dma_device_type": 1 00:07:52.265 }, 00:07:52.265 { 00:07:52.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.265 "dma_device_type": 2 00:07:52.265 } 00:07:52.265 ], 00:07:52.265 "driver_specific": {} 00:07:52.265 } 00:07:52.265 ] 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.265 "name": "Existed_Raid", 00:07:52.265 "uuid": "1f6e2ce0-c32c-4448-9c2b-d51d7236c38e", 00:07:52.265 "strip_size_kb": 64, 00:07:52.265 "state": "online", 00:07:52.265 "raid_level": "concat", 00:07:52.265 "superblock": true, 00:07:52.265 "num_base_bdevs": 2, 00:07:52.265 "num_base_bdevs_discovered": 2, 00:07:52.265 "num_base_bdevs_operational": 2, 00:07:52.265 "base_bdevs_list": [ 00:07:52.265 { 00:07:52.265 "name": "BaseBdev1", 00:07:52.265 "uuid": "ff497933-420e-4974-9159-25d4db4a8207", 00:07:52.265 "is_configured": true, 00:07:52.265 "data_offset": 2048, 00:07:52.265 "data_size": 63488 00:07:52.265 }, 00:07:52.265 { 00:07:52.265 "name": "BaseBdev2", 00:07:52.265 "uuid": "1f524d9d-7eed-44e0-8731-95c43b361620", 00:07:52.265 "is_configured": true, 00:07:52.265 "data_offset": 2048, 00:07:52.265 "data_size": 63488 00:07:52.265 } 00:07:52.265 ] 00:07:52.265 }' 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.265 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.523 09:26:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.523 09:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.783 [2024-11-15 09:26:40.994314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.783 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.783 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.783 "name": "Existed_Raid", 00:07:52.783 "aliases": [ 00:07:52.783 "1f6e2ce0-c32c-4448-9c2b-d51d7236c38e" 00:07:52.783 ], 00:07:52.783 "product_name": "Raid Volume", 00:07:52.783 "block_size": 512, 00:07:52.783 "num_blocks": 126976, 00:07:52.783 "uuid": "1f6e2ce0-c32c-4448-9c2b-d51d7236c38e", 00:07:52.783 "assigned_rate_limits": { 00:07:52.783 "rw_ios_per_sec": 0, 00:07:52.783 "rw_mbytes_per_sec": 0, 00:07:52.783 "r_mbytes_per_sec": 0, 00:07:52.783 "w_mbytes_per_sec": 0 00:07:52.783 }, 00:07:52.783 "claimed": false, 00:07:52.783 "zoned": false, 00:07:52.783 "supported_io_types": { 00:07:52.783 "read": true, 00:07:52.783 "write": true, 00:07:52.783 "unmap": true, 00:07:52.783 "flush": true, 00:07:52.783 "reset": true, 00:07:52.783 "nvme_admin": false, 00:07:52.783 "nvme_io": false, 00:07:52.783 "nvme_io_md": false, 00:07:52.783 "write_zeroes": true, 00:07:52.783 "zcopy": false, 00:07:52.783 "get_zone_info": false, 00:07:52.783 "zone_management": false, 00:07:52.783 "zone_append": false, 00:07:52.783 "compare": false, 00:07:52.783 "compare_and_write": false, 00:07:52.783 "abort": false, 00:07:52.783 "seek_hole": false, 00:07:52.783 "seek_data": false, 00:07:52.783 "copy": false, 00:07:52.783 "nvme_iov_md": false 00:07:52.783 }, 00:07:52.783 "memory_domains": [ 00:07:52.783 { 00:07:52.783 "dma_device_id": "system", 00:07:52.783 
"dma_device_type": 1 00:07:52.783 }, 00:07:52.783 { 00:07:52.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.783 "dma_device_type": 2 00:07:52.783 }, 00:07:52.783 { 00:07:52.783 "dma_device_id": "system", 00:07:52.783 "dma_device_type": 1 00:07:52.783 }, 00:07:52.783 { 00:07:52.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.783 "dma_device_type": 2 00:07:52.783 } 00:07:52.783 ], 00:07:52.783 "driver_specific": { 00:07:52.783 "raid": { 00:07:52.783 "uuid": "1f6e2ce0-c32c-4448-9c2b-d51d7236c38e", 00:07:52.783 "strip_size_kb": 64, 00:07:52.783 "state": "online", 00:07:52.783 "raid_level": "concat", 00:07:52.783 "superblock": true, 00:07:52.783 "num_base_bdevs": 2, 00:07:52.783 "num_base_bdevs_discovered": 2, 00:07:52.783 "num_base_bdevs_operational": 2, 00:07:52.783 "base_bdevs_list": [ 00:07:52.783 { 00:07:52.783 "name": "BaseBdev1", 00:07:52.784 "uuid": "ff497933-420e-4974-9159-25d4db4a8207", 00:07:52.784 "is_configured": true, 00:07:52.784 "data_offset": 2048, 00:07:52.784 "data_size": 63488 00:07:52.784 }, 00:07:52.784 { 00:07:52.784 "name": "BaseBdev2", 00:07:52.784 "uuid": "1f524d9d-7eed-44e0-8731-95c43b361620", 00:07:52.784 "is_configured": true, 00:07:52.784 "data_offset": 2048, 00:07:52.784 "data_size": 63488 00:07:52.784 } 00:07:52.784 ] 00:07:52.784 } 00:07:52.784 } 00:07:52.784 }' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.784 BaseBdev2' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.784 09:26:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.784 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.784 [2024-11-15 09:26:41.233623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.784 [2024-11-15 09:26:41.233723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.784 [2024-11-15 09:26:41.233800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.043 "name": "Existed_Raid", 00:07:53.043 "uuid": "1f6e2ce0-c32c-4448-9c2b-d51d7236c38e", 00:07:53.043 "strip_size_kb": 64, 00:07:53.043 "state": "offline", 00:07:53.043 "raid_level": "concat", 00:07:53.043 "superblock": true, 00:07:53.043 "num_base_bdevs": 2, 00:07:53.043 "num_base_bdevs_discovered": 1, 00:07:53.043 "num_base_bdevs_operational": 1, 00:07:53.043 "base_bdevs_list": [ 00:07:53.043 { 00:07:53.043 "name": null, 00:07:53.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.043 "is_configured": false, 00:07:53.043 "data_offset": 0, 00:07:53.043 "data_size": 63488 00:07:53.043 }, 00:07:53.043 { 00:07:53.043 "name": "BaseBdev2", 00:07:53.043 "uuid": "1f524d9d-7eed-44e0-8731-95c43b361620", 00:07:53.043 "is_configured": true, 00:07:53.043 "data_offset": 2048, 00:07:53.043 "data_size": 63488 00:07:53.043 } 00:07:53.043 ] 00:07:53.043 }' 00:07:53.043 09:26:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.043 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.611 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.611 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.611 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.611 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.612 [2024-11-15 09:26:41.842357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.612 [2024-11-15 09:26:41.842432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.612 09:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62222 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62222 ']' 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62222 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62222 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:53.612 09:26:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62222' 00:07:53.612 killing process with pid 62222 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62222 00:07:53.612 [2024-11-15 09:26:42.049228] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.612 09:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62222 00:07:53.612 [2024-11-15 09:26:42.069294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.530 09:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:55.530 00:07:55.530 real 0m5.654s 00:07:55.530 user 0m7.938s 00:07:55.530 sys 0m1.013s 00:07:55.530 09:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.530 09:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 ************************************ 00:07:55.530 END TEST raid_state_function_test_sb 00:07:55.530 ************************************ 00:07:55.530 09:26:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:55.530 09:26:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:55.530 09:26:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.530 09:26:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 ************************************ 00:07:55.530 START TEST raid_superblock_test 00:07:55.530 ************************************ 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62480 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62480 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62480 ']' 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.530 09:26:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 [2024-11-15 09:26:43.638518] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:07:55.530 [2024-11-15 09:26:43.638671] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62480 ] 00:07:55.530 [2024-11-15 09:26:43.835733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.790 [2024-11-15 09:26:43.999182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.050 [2024-11-15 09:26:44.284493] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.050 [2024-11-15 09:26:44.284680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.309 09:26:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.309 malloc1 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.309 [2024-11-15 09:26:44.666143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.309 [2024-11-15 09:26:44.666312] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.309 [2024-11-15 09:26:44.666377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:56.309 [2024-11-15 09:26:44.666420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.309 
[2024-11-15 09:26:44.669401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.309 [2024-11-15 09:26:44.669505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.309 pt1 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.309 malloc2 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.309 09:26:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.309 [2024-11-15 09:26:44.731247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.309 [2024-11-15 09:26:44.731323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.309 [2024-11-15 09:26:44.731350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:56.309 [2024-11-15 09:26:44.731361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.309 [2024-11-15 09:26:44.734452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.309 [2024-11-15 09:26:44.734520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.309 pt2 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.309 [2024-11-15 09:26:44.739492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.309 [2024-11-15 09:26:44.742092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.309 [2024-11-15 09:26:44.742335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:56.309 [2024-11-15 09:26:44.742351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:56.309 
[2024-11-15 09:26:44.742719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.309 [2024-11-15 09:26:44.743077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:56.309 [2024-11-15 09:26:44.743140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:56.309 [2024-11-15 09:26:44.743518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.309 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.310 09:26:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.310 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.568 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.568 "name": "raid_bdev1", 00:07:56.568 "uuid": "ad888d3d-2cbe-4463-bd86-1a0d6049e69e", 00:07:56.568 "strip_size_kb": 64, 00:07:56.568 "state": "online", 00:07:56.568 "raid_level": "concat", 00:07:56.568 "superblock": true, 00:07:56.568 "num_base_bdevs": 2, 00:07:56.568 "num_base_bdevs_discovered": 2, 00:07:56.568 "num_base_bdevs_operational": 2, 00:07:56.568 "base_bdevs_list": [ 00:07:56.568 { 00:07:56.568 "name": "pt1", 00:07:56.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.568 "is_configured": true, 00:07:56.568 "data_offset": 2048, 00:07:56.568 "data_size": 63488 00:07:56.568 }, 00:07:56.568 { 00:07:56.568 "name": "pt2", 00:07:56.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.568 "is_configured": true, 00:07:56.568 "data_offset": 2048, 00:07:56.568 "data_size": 63488 00:07:56.568 } 00:07:56.568 ] 00:07:56.568 }' 00:07:56.568 09:26:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.568 09:26:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.828 
09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.828 [2024-11-15 09:26:45.223173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.828 "name": "raid_bdev1", 00:07:56.828 "aliases": [ 00:07:56.828 "ad888d3d-2cbe-4463-bd86-1a0d6049e69e" 00:07:56.828 ], 00:07:56.828 "product_name": "Raid Volume", 00:07:56.828 "block_size": 512, 00:07:56.828 "num_blocks": 126976, 00:07:56.828 "uuid": "ad888d3d-2cbe-4463-bd86-1a0d6049e69e", 00:07:56.828 "assigned_rate_limits": { 00:07:56.828 "rw_ios_per_sec": 0, 00:07:56.828 "rw_mbytes_per_sec": 0, 00:07:56.828 "r_mbytes_per_sec": 0, 00:07:56.828 "w_mbytes_per_sec": 0 00:07:56.828 }, 00:07:56.828 "claimed": false, 00:07:56.828 "zoned": false, 00:07:56.828 "supported_io_types": { 00:07:56.828 "read": true, 00:07:56.828 "write": true, 00:07:56.828 "unmap": true, 00:07:56.828 "flush": true, 00:07:56.828 "reset": true, 00:07:56.828 "nvme_admin": false, 00:07:56.828 "nvme_io": false, 00:07:56.828 "nvme_io_md": false, 00:07:56.828 "write_zeroes": true, 00:07:56.828 "zcopy": false, 00:07:56.828 "get_zone_info": false, 00:07:56.828 "zone_management": false, 00:07:56.828 "zone_append": false, 00:07:56.828 "compare": false, 00:07:56.828 "compare_and_write": false, 00:07:56.828 "abort": false, 00:07:56.828 "seek_hole": false, 00:07:56.828 
"seek_data": false, 00:07:56.828 "copy": false, 00:07:56.828 "nvme_iov_md": false 00:07:56.828 }, 00:07:56.828 "memory_domains": [ 00:07:56.828 { 00:07:56.828 "dma_device_id": "system", 00:07:56.828 "dma_device_type": 1 00:07:56.828 }, 00:07:56.828 { 00:07:56.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.828 "dma_device_type": 2 00:07:56.828 }, 00:07:56.828 { 00:07:56.828 "dma_device_id": "system", 00:07:56.828 "dma_device_type": 1 00:07:56.828 }, 00:07:56.828 { 00:07:56.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.828 "dma_device_type": 2 00:07:56.828 } 00:07:56.828 ], 00:07:56.828 "driver_specific": { 00:07:56.828 "raid": { 00:07:56.828 "uuid": "ad888d3d-2cbe-4463-bd86-1a0d6049e69e", 00:07:56.828 "strip_size_kb": 64, 00:07:56.828 "state": "online", 00:07:56.828 "raid_level": "concat", 00:07:56.828 "superblock": true, 00:07:56.828 "num_base_bdevs": 2, 00:07:56.828 "num_base_bdevs_discovered": 2, 00:07:56.828 "num_base_bdevs_operational": 2, 00:07:56.828 "base_bdevs_list": [ 00:07:56.828 { 00:07:56.828 "name": "pt1", 00:07:56.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.828 "is_configured": true, 00:07:56.828 "data_offset": 2048, 00:07:56.828 "data_size": 63488 00:07:56.828 }, 00:07:56.828 { 00:07:56.828 "name": "pt2", 00:07:56.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.828 "is_configured": true, 00:07:56.828 "data_offset": 2048, 00:07:56.828 "data_size": 63488 00:07:56.828 } 00:07:56.828 ] 00:07:56.828 } 00:07:56.828 } 00:07:56.828 }' 00:07:56.828 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.089 pt2' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.089 09:26:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:57.089 [2024-11-15 09:26:45.442690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ad888d3d-2cbe-4463-bd86-1a0d6049e69e 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ad888d3d-2cbe-4463-bd86-1a0d6049e69e ']' 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.089 [2024-11-15 09:26:45.490286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.089 [2024-11-15 09:26:45.490321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.089 [2024-11-15 09:26:45.490428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.089 [2024-11-15 09:26:45.490494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.089 [2024-11-15 09:26:45.490507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.089 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.090 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.349 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.349 [2024-11-15 09:26:45.622160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:57.349 [2024-11-15 09:26:45.624552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:57.349 [2024-11-15 09:26:45.624646] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:57.349 [2024-11-15 09:26:45.624713] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:57.349 [2024-11-15 09:26:45.624730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.349 [2024-11-15 09:26:45.624742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:57.349 request: 00:07:57.349 { 00:07:57.349 "name": "raid_bdev1", 00:07:57.350 "raid_level": "concat", 00:07:57.350 "base_bdevs": [ 00:07:57.350 "malloc1", 00:07:57.350 "malloc2" 00:07:57.350 ], 00:07:57.350 "strip_size_kb": 64, 00:07:57.350 "superblock": false, 00:07:57.350 "method": "bdev_raid_create", 00:07:57.350 "req_id": 1 00:07:57.350 } 00:07:57.350 Got JSON-RPC error response 00:07:57.350 response: 00:07:57.350 { 00:07:57.350 "code": -17, 00:07:57.350 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:57.350 } 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.350 
09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.350 [2024-11-15 09:26:45.690039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.350 [2024-11-15 09:26:45.690128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.350 [2024-11-15 09:26:45.690155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:57.350 [2024-11-15 09:26:45.690169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.350 [2024-11-15 09:26:45.693101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.350 [2024-11-15 09:26:45.693151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.350 [2024-11-15 09:26:45.693265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.350 [2024-11-15 09:26:45.693343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.350 pt1 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.350 "name": "raid_bdev1", 00:07:57.350 "uuid": "ad888d3d-2cbe-4463-bd86-1a0d6049e69e", 00:07:57.350 "strip_size_kb": 64, 00:07:57.350 "state": "configuring", 00:07:57.350 "raid_level": "concat", 00:07:57.350 "superblock": true, 00:07:57.350 "num_base_bdevs": 2, 00:07:57.350 "num_base_bdevs_discovered": 1, 00:07:57.350 "num_base_bdevs_operational": 2, 00:07:57.350 "base_bdevs_list": [ 00:07:57.350 { 00:07:57.350 "name": "pt1", 00:07:57.350 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:57.350 "is_configured": true, 00:07:57.350 "data_offset": 2048, 00:07:57.350 "data_size": 63488 00:07:57.350 }, 00:07:57.350 { 00:07:57.350 "name": null, 00:07:57.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.350 "is_configured": false, 00:07:57.350 "data_offset": 2048, 00:07:57.350 "data_size": 63488 00:07:57.350 } 00:07:57.350 ] 00:07:57.350 }' 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.350 09:26:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.919 [2024-11-15 09:26:46.161225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.919 [2024-11-15 09:26:46.161329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.919 [2024-11-15 09:26:46.161357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:57.919 [2024-11-15 09:26:46.161372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.919 [2024-11-15 09:26:46.161999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.919 [2024-11-15 09:26:46.162033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:57.919 [2024-11-15 09:26:46.162139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:57.919 [2024-11-15 09:26:46.162175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.919 [2024-11-15 09:26:46.162325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.919 [2024-11-15 09:26:46.162347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.919 [2024-11-15 09:26:46.162641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:57.919 [2024-11-15 09:26:46.162873] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.919 [2024-11-15 09:26:46.162892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:57.919 [2024-11-15 09:26:46.163057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.919 pt2 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.919 "name": "raid_bdev1", 00:07:57.919 "uuid": "ad888d3d-2cbe-4463-bd86-1a0d6049e69e", 00:07:57.919 "strip_size_kb": 64, 00:07:57.919 "state": "online", 00:07:57.919 "raid_level": "concat", 00:07:57.919 "superblock": true, 00:07:57.919 "num_base_bdevs": 2, 00:07:57.919 "num_base_bdevs_discovered": 2, 00:07:57.919 "num_base_bdevs_operational": 2, 00:07:57.919 "base_bdevs_list": [ 00:07:57.919 { 00:07:57.919 "name": "pt1", 00:07:57.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.919 "is_configured": true, 00:07:57.919 "data_offset": 2048, 00:07:57.919 "data_size": 63488 00:07:57.919 }, 00:07:57.919 { 00:07:57.919 "name": "pt2", 00:07:57.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.919 "is_configured": true, 00:07:57.919 "data_offset": 2048, 00:07:57.919 "data_size": 63488 00:07:57.919 } 00:07:57.919 ] 00:07:57.919 }' 00:07:57.919 09:26:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.919 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.179 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.179 [2024-11-15 09:26:46.636687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.439 "name": "raid_bdev1", 00:07:58.439 "aliases": [ 00:07:58.439 "ad888d3d-2cbe-4463-bd86-1a0d6049e69e" 00:07:58.439 ], 00:07:58.439 "product_name": "Raid Volume", 00:07:58.439 "block_size": 512, 00:07:58.439 "num_blocks": 126976, 00:07:58.439 "uuid": "ad888d3d-2cbe-4463-bd86-1a0d6049e69e", 00:07:58.439 "assigned_rate_limits": { 00:07:58.439 "rw_ios_per_sec": 0, 00:07:58.439 "rw_mbytes_per_sec": 0, 00:07:58.439 
"r_mbytes_per_sec": 0, 00:07:58.439 "w_mbytes_per_sec": 0 00:07:58.439 }, 00:07:58.439 "claimed": false, 00:07:58.439 "zoned": false, 00:07:58.439 "supported_io_types": { 00:07:58.439 "read": true, 00:07:58.439 "write": true, 00:07:58.439 "unmap": true, 00:07:58.439 "flush": true, 00:07:58.439 "reset": true, 00:07:58.439 "nvme_admin": false, 00:07:58.439 "nvme_io": false, 00:07:58.439 "nvme_io_md": false, 00:07:58.439 "write_zeroes": true, 00:07:58.439 "zcopy": false, 00:07:58.439 "get_zone_info": false, 00:07:58.439 "zone_management": false, 00:07:58.439 "zone_append": false, 00:07:58.439 "compare": false, 00:07:58.439 "compare_and_write": false, 00:07:58.439 "abort": false, 00:07:58.439 "seek_hole": false, 00:07:58.439 "seek_data": false, 00:07:58.439 "copy": false, 00:07:58.439 "nvme_iov_md": false 00:07:58.439 }, 00:07:58.439 "memory_domains": [ 00:07:58.439 { 00:07:58.439 "dma_device_id": "system", 00:07:58.439 "dma_device_type": 1 00:07:58.439 }, 00:07:58.439 { 00:07:58.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.439 "dma_device_type": 2 00:07:58.439 }, 00:07:58.439 { 00:07:58.439 "dma_device_id": "system", 00:07:58.439 "dma_device_type": 1 00:07:58.439 }, 00:07:58.439 { 00:07:58.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.439 "dma_device_type": 2 00:07:58.439 } 00:07:58.439 ], 00:07:58.439 "driver_specific": { 00:07:58.439 "raid": { 00:07:58.439 "uuid": "ad888d3d-2cbe-4463-bd86-1a0d6049e69e", 00:07:58.439 "strip_size_kb": 64, 00:07:58.439 "state": "online", 00:07:58.439 "raid_level": "concat", 00:07:58.439 "superblock": true, 00:07:58.439 "num_base_bdevs": 2, 00:07:58.439 "num_base_bdevs_discovered": 2, 00:07:58.439 "num_base_bdevs_operational": 2, 00:07:58.439 "base_bdevs_list": [ 00:07:58.439 { 00:07:58.439 "name": "pt1", 00:07:58.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.439 "is_configured": true, 00:07:58.439 "data_offset": 2048, 00:07:58.439 "data_size": 63488 00:07:58.439 }, 00:07:58.439 { 00:07:58.439 "name": 
"pt2", 00:07:58.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.439 "is_configured": true, 00:07:58.439 "data_offset": 2048, 00:07:58.439 "data_size": 63488 00:07:58.439 } 00:07:58.439 ] 00:07:58.439 } 00:07:58.439 } 00:07:58.439 }' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.439 pt2' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.439 09:26:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.439 [2024-11-15 09:26:46.840371] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ad888d3d-2cbe-4463-bd86-1a0d6049e69e '!=' ad888d3d-2cbe-4463-bd86-1a0d6049e69e ']' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62480 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62480 ']' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 62480 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62480 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:58.439 killing process with pid 62480 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62480' 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62480 00:07:58.439 [2024-11-15 09:26:46.898149] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.439 [2024-11-15 09:26:46.898281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.439 [2024-11-15 09:26:46.898353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.439 [2024-11-15 09:26:46.898367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.439 09:26:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62480 00:07:58.698 [2024-11-15 09:26:47.137320] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.075 09:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:00.075 00:08:00.075 real 0m4.896s 00:08:00.075 user 0m6.719s 00:08:00.075 sys 0m0.914s 00:08:00.075 09:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.075 09:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:00.075 ************************************ 00:08:00.075 END TEST raid_superblock_test 00:08:00.075 ************************************ 00:08:00.075 09:26:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:00.075 09:26:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:00.075 09:26:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.075 09:26:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.075 ************************************ 00:08:00.075 START TEST raid_read_error_test 00:08:00.075 ************************************ 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xLetPD4r42 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62697 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62697 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62697 ']' 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.075 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.075 09:26:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.334 [2024-11-15 09:26:48.620111] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:08:00.335 [2024-11-15 09:26:48.620379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62697 ] 00:08:00.335 [2024-11-15 09:26:48.784655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.593 [2024-11-15 09:26:48.928141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.852 [2024-11-15 09:26:49.177722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.852 [2024-11-15 09:26:49.177811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.112 BaseBdev1_malloc 
00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.112 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.490 true 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.490 [2024-11-15 09:26:49.586671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:01.490 [2024-11-15 09:26:49.586755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.490 [2024-11-15 09:26:49.586785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:01.490 [2024-11-15 09:26:49.586800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.490 [2024-11-15 09:26:49.589707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.490 [2024-11-15 09:26:49.589821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:01.490 BaseBdev1 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.490 BaseBdev2_malloc 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.490 true 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.490 [2024-11-15 09:26:49.663752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:01.490 [2024-11-15 09:26:49.663836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.490 [2024-11-15 09:26:49.663877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:01.490 [2024-11-15 09:26:49.663892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.490 [2024-11-15 09:26:49.666715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.490 [2024-11-15 09:26:49.666775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:01.490 BaseBdev2 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.490 [2024-11-15 09:26:49.675906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.490 [2024-11-15 09:26:49.678528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.490 [2024-11-15 09:26:49.678790] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.490 [2024-11-15 09:26:49.678810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:01.490 [2024-11-15 09:26:49.679158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:01.490 [2024-11-15 09:26:49.679390] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.490 [2024-11-15 09:26:49.679405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:01.490 [2024-11-15 09:26:49.679625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.490 "name": "raid_bdev1", 00:08:01.490 "uuid": "dc73e4e2-45cd-4bb4-95e8-4aafd7ef431e", 00:08:01.490 "strip_size_kb": 64, 00:08:01.490 "state": "online", 00:08:01.490 "raid_level": "concat", 00:08:01.490 "superblock": true, 00:08:01.490 "num_base_bdevs": 2, 00:08:01.490 "num_base_bdevs_discovered": 2, 00:08:01.490 "num_base_bdevs_operational": 2, 00:08:01.490 "base_bdevs_list": [ 00:08:01.490 { 00:08:01.490 "name": "BaseBdev1", 00:08:01.490 "uuid": "fe6cb990-0120-5b84-9cd7-414c86ab9db0", 00:08:01.490 "is_configured": true, 00:08:01.490 "data_offset": 2048, 00:08:01.490 "data_size": 63488 00:08:01.490 }, 00:08:01.490 { 00:08:01.490 "name": "BaseBdev2", 00:08:01.490 
"uuid": "f7845667-fe12-58d8-bfa0-43601227321e", 00:08:01.490 "is_configured": true, 00:08:01.490 "data_offset": 2048, 00:08:01.490 "data_size": 63488 00:08:01.490 } 00:08:01.490 ] 00:08:01.490 }' 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.490 09:26:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.764 09:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:01.764 09:26:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:01.764 [2024-11-15 09:26:50.216453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.711 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.712 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.712 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.712 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.712 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.712 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.989 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.989 "name": "raid_bdev1", 00:08:02.989 "uuid": "dc73e4e2-45cd-4bb4-95e8-4aafd7ef431e", 00:08:02.989 "strip_size_kb": 64, 00:08:02.989 "state": "online", 00:08:02.989 "raid_level": "concat", 00:08:02.989 "superblock": true, 00:08:02.989 "num_base_bdevs": 2, 00:08:02.989 "num_base_bdevs_discovered": 2, 00:08:02.989 "num_base_bdevs_operational": 2, 00:08:02.989 "base_bdevs_list": [ 00:08:02.989 { 00:08:02.989 "name": "BaseBdev1", 00:08:02.989 "uuid": "fe6cb990-0120-5b84-9cd7-414c86ab9db0", 00:08:02.989 "is_configured": true, 00:08:02.989 "data_offset": 2048, 00:08:02.989 "data_size": 63488 00:08:02.989 }, 00:08:02.989 { 00:08:02.989 "name": "BaseBdev2", 00:08:02.989 "uuid": 
"f7845667-fe12-58d8-bfa0-43601227321e", 00:08:02.989 "is_configured": true, 00:08:02.989 "data_offset": 2048, 00:08:02.989 "data_size": 63488 00:08:02.989 } 00:08:02.989 ] 00:08:02.989 }' 00:08:02.989 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.989 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.248 [2024-11-15 09:26:51.614207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.248 [2024-11-15 09:26:51.614258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.248 [2024-11-15 09:26:51.617452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.248 [2024-11-15 09:26:51.617583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.248 [2024-11-15 09:26:51.617650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.248 [2024-11-15 09:26:51.617671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:03.248 { 00:08:03.248 "results": [ 00:08:03.248 { 00:08:03.248 "job": "raid_bdev1", 00:08:03.248 "core_mask": "0x1", 00:08:03.248 "workload": "randrw", 00:08:03.248 "percentage": 50, 00:08:03.248 "status": "finished", 00:08:03.248 "queue_depth": 1, 00:08:03.248 "io_size": 131072, 00:08:03.248 "runtime": 1.398095, 00:08:03.248 "iops": 12660.08389987805, 00:08:03.248 "mibps": 1582.5104874847561, 00:08:03.248 "io_failed": 1, 00:08:03.248 "io_timeout": 0, 00:08:03.248 "avg_latency_us": 
110.97173569006166, 00:08:03.248 "min_latency_us": 26.270742358078603, 00:08:03.248 "max_latency_us": 1731.4096069868995 00:08:03.248 } 00:08:03.248 ], 00:08:03.248 "core_count": 1 00:08:03.248 } 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62697 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62697 ']' 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62697 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62697 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62697' 00:08:03.248 killing process with pid 62697 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62697 00:08:03.248 [2024-11-15 09:26:51.652607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.248 09:26:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62697 00:08:03.509 [2024-11-15 09:26:51.813499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xLetPD4r42 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:04.889 
09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:04.889 00:08:04.889 real 0m4.663s 00:08:04.889 user 0m5.518s 00:08:04.889 sys 0m0.649s 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.889 ************************************ 00:08:04.889 END TEST raid_read_error_test 00:08:04.889 ************************************ 00:08:04.889 09:26:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.889 09:26:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:04.889 09:26:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:04.889 09:26:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.889 09:26:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.889 ************************************ 00:08:04.889 START TEST raid_write_error_test 00:08:04.889 ************************************ 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:04.889 09:26:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6oGEZJ6bZH 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62837 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:04.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62837 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62837 ']' 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.889 09:26:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.148 [2024-11-15 09:26:53.355737] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:08:05.148 [2024-11-15 09:26:53.356021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62837 ] 00:08:05.148 [2024-11-15 09:26:53.539066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.407 [2024-11-15 09:26:53.686052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.666 [2024-11-15 09:26:53.940029] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.666 [2024-11-15 09:26:53.940248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.926 BaseBdev1_malloc 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.926 true 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.926 [2024-11-15 09:26:54.308985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:05.926 [2024-11-15 09:26:54.309160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.926 [2024-11-15 09:26:54.309207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:05.926 [2024-11-15 09:26:54.309246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.926 [2024-11-15 09:26:54.311897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.926 [2024-11-15 09:26:54.311974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:05.926 BaseBdev1 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.926 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.927 BaseBdev2_malloc 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:05.927 09:26:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.927 true 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.927 [2024-11-15 09:26:54.384006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:05.927 [2024-11-15 09:26:54.384066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.927 [2024-11-15 09:26:54.384084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:05.927 [2024-11-15 09:26:54.384096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.927 [2024-11-15 09:26:54.386527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.927 [2024-11-15 09:26:54.386569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:05.927 BaseBdev2 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.927 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.195 [2024-11-15 09:26:54.396072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:06.195 [2024-11-15 09:26:54.398459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.195 [2024-11-15 09:26:54.398686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.195 [2024-11-15 09:26:54.398702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:06.195 [2024-11-15 09:26:54.398987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:06.195 [2024-11-15 09:26:54.399215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.195 [2024-11-15 09:26:54.399237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:06.195 [2024-11-15 09:26:54.399416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.195 09:26:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.195 "name": "raid_bdev1", 00:08:06.195 "uuid": "79771fa3-fee6-474a-b491-aeef15aa24a1", 00:08:06.195 "strip_size_kb": 64, 00:08:06.195 "state": "online", 00:08:06.195 "raid_level": "concat", 00:08:06.195 "superblock": true, 00:08:06.195 "num_base_bdevs": 2, 00:08:06.195 "num_base_bdevs_discovered": 2, 00:08:06.195 "num_base_bdevs_operational": 2, 00:08:06.195 "base_bdevs_list": [ 00:08:06.195 { 00:08:06.195 "name": "BaseBdev1", 00:08:06.195 "uuid": "ff8cf60e-6f5b-5cf2-9a53-92b3add6f3d2", 00:08:06.195 "is_configured": true, 00:08:06.195 "data_offset": 2048, 00:08:06.195 "data_size": 63488 00:08:06.195 }, 00:08:06.195 { 00:08:06.195 "name": "BaseBdev2", 00:08:06.195 "uuid": "a13a0343-6f18-5320-9a58-56031a4453cd", 00:08:06.195 "is_configured": true, 00:08:06.195 "data_offset": 2048, 00:08:06.195 "data_size": 63488 00:08:06.195 } 00:08:06.195 ] 00:08:06.195 }' 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.195 09:26:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.456 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:06.456 09:26:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:06.716 [2024-11-15 09:26:54.976605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.654 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.655 "name": "raid_bdev1", 00:08:07.655 "uuid": "79771fa3-fee6-474a-b491-aeef15aa24a1", 00:08:07.655 "strip_size_kb": 64, 00:08:07.655 "state": "online", 00:08:07.655 "raid_level": "concat", 00:08:07.655 "superblock": true, 00:08:07.655 "num_base_bdevs": 2, 00:08:07.655 "num_base_bdevs_discovered": 2, 00:08:07.655 "num_base_bdevs_operational": 2, 00:08:07.655 "base_bdevs_list": [ 00:08:07.655 { 00:08:07.655 "name": "BaseBdev1", 00:08:07.655 "uuid": "ff8cf60e-6f5b-5cf2-9a53-92b3add6f3d2", 00:08:07.655 "is_configured": true, 00:08:07.655 "data_offset": 2048, 00:08:07.655 "data_size": 63488 00:08:07.655 }, 00:08:07.655 { 00:08:07.655 "name": "BaseBdev2", 00:08:07.655 "uuid": "a13a0343-6f18-5320-9a58-56031a4453cd", 00:08:07.655 "is_configured": true, 00:08:07.655 "data_offset": 2048, 00:08:07.655 "data_size": 63488 00:08:07.655 } 00:08:07.655 ] 00:08:07.655 }' 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.655 09:26:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.914 09:26:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:07.914 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.914 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.914 [2024-11-15 09:26:56.374373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:07.914 [2024-11-15 09:26:56.374424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.914 [2024-11-15 09:26:56.377539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.914 [2024-11-15 09:26:56.377590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.914 [2024-11-15 09:26:56.377628] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:07.914 [2024-11-15 09:26:56.377644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:08.173 { 00:08:08.173 "results": [ 00:08:08.173 { 00:08:08.173 "job": "raid_bdev1", 00:08:08.173 "core_mask": "0x1", 00:08:08.173 "workload": "randrw", 00:08:08.173 "percentage": 50, 00:08:08.173 "status": "finished", 00:08:08.173 "queue_depth": 1, 00:08:08.173 "io_size": 131072, 00:08:08.173 "runtime": 1.397929, 00:08:08.173 "iops": 13134.42957403416, 00:08:08.173 "mibps": 1641.80369675427, 00:08:08.173 "io_failed": 1, 00:08:08.173 "io_timeout": 0, 00:08:08.173 "avg_latency_us": 107.25995978974996, 00:08:08.173 "min_latency_us": 25.9353711790393, 00:08:08.173 "max_latency_us": 1624.0908296943232 00:08:08.173 } 00:08:08.173 ], 00:08:08.173 "core_count": 1 00:08:08.173 } 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62837 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@952 -- # '[' -z 62837 ']' 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62837 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62837 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:08.173 killing process with pid 62837 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62837' 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62837 00:08:08.173 [2024-11-15 09:26:56.426608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.173 09:26:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62837 00:08:08.173 [2024-11-15 09:26:56.581200] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6oGEZJ6bZH 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:09.559 00:08:09.559 real 0m4.696s 00:08:09.559 user 0m5.544s 00:08:09.559 sys 0m0.687s 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.559 09:26:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.559 ************************************ 00:08:09.559 END TEST raid_write_error_test 00:08:09.559 ************************************ 00:08:09.559 09:26:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:09.559 09:26:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:09.559 09:26:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:09.559 09:26:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.559 09:26:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.559 ************************************ 00:08:09.559 START TEST raid_state_function_test 00:08:09.559 ************************************ 00:08:09.559 09:26:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:08:09.559 09:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:09.559 09:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:09.560 09:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:09.560 09:26:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62981
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:09.560 Process raid pid: 62981
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62981'
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62981
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62981 ']'
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:09.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:09.560 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.819 [2024-11-15 09:26:58.116027] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization...
00:08:09.819 [2024-11-15 09:26:58.116324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:10.078 [2024-11-15 09:26:58.304314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.078 [2024-11-15 09:26:58.459067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.338 [2024-11-15 09:26:58.723986] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:10.338 [2024-11-15 09:26:58.724044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:10.597 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:10.597 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0
00:08:10.597 09:26:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:10.597 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.597 09:26:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.597 [2024-11-15 09:26:58.998972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:10.597 [2024-11-15 09:26:58.999111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:10.597 [2024-11-15 09:26:58.999161] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:10.597 [2024-11-15 09:26:58.999177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:10.597 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:10.597 "name": "Existed_Raid",
00:08:10.597 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.597 "strip_size_kb": 0,
00:08:10.597 "state": "configuring",
00:08:10.597 "raid_level": "raid1",
00:08:10.597 "superblock": false,
00:08:10.597 "num_base_bdevs": 2,
00:08:10.597 "num_base_bdevs_discovered": 0,
00:08:10.597 "num_base_bdevs_operational": 2,
00:08:10.597 "base_bdevs_list": [
00:08:10.597 {
00:08:10.597 "name": "BaseBdev1",
00:08:10.597 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.597 "is_configured": false,
00:08:10.598 "data_offset": 0,
00:08:10.598 "data_size": 0
00:08:10.598 },
00:08:10.598 {
00:08:10.598 "name": "BaseBdev2",
00:08:10.598 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.598 "is_configured": false,
00:08:10.598 "data_offset": 0,
00:08:10.598 "data_size": 0
00:08:10.598 }
00:08:10.598 ]
00:08:10.598 }'
00:08:10.598 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:10.598 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.167 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:11.167 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.167 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.167 [2024-11-15 09:26:59.446154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:11.167 [2024-11-15 09:26:59.446268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:11.167 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.167 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.168 [2024-11-15 09:26:59.458157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:11.168 [2024-11-15 09:26:59.458330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:11.168 [2024-11-15 09:26:59.458366] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:11.168 [2024-11-15 09:26:59.458397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.168 [2024-11-15 09:26:59.517013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:11.168 BaseBdev1
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.168 [
00:08:11.168 {
00:08:11.168 "name": "BaseBdev1",
00:08:11.168 "aliases": [
00:08:11.168 "c9f67048-0e1e-4807-9394-8a050ee6b46d"
00:08:11.168 ],
00:08:11.168 "product_name": "Malloc disk",
00:08:11.168 "block_size": 512,
00:08:11.168 "num_blocks": 65536,
00:08:11.168 "uuid": "c9f67048-0e1e-4807-9394-8a050ee6b46d",
00:08:11.168 "assigned_rate_limits": {
00:08:11.168 "rw_ios_per_sec": 0,
00:08:11.168 "rw_mbytes_per_sec": 0,
00:08:11.168 "r_mbytes_per_sec": 0,
00:08:11.168 "w_mbytes_per_sec": 0
00:08:11.168 },
00:08:11.168 "claimed": true,
00:08:11.168 "claim_type": "exclusive_write",
00:08:11.168 "zoned": false,
00:08:11.168 "supported_io_types": {
00:08:11.168 "read": true,
00:08:11.168 "write": true,
00:08:11.168 "unmap": true,
00:08:11.168 "flush": true,
00:08:11.168 "reset": true,
00:08:11.168 "nvme_admin": false,
00:08:11.168 "nvme_io": false,
00:08:11.168 "nvme_io_md": false,
00:08:11.168 "write_zeroes": true,
00:08:11.168 "zcopy": true,
00:08:11.168 "get_zone_info": false,
00:08:11.168 "zone_management": false,
00:08:11.168 "zone_append": false,
00:08:11.168 "compare": false,
00:08:11.168 "compare_and_write": false,
00:08:11.168 "abort": true,
00:08:11.168 "seek_hole": false,
00:08:11.168 "seek_data": false,
00:08:11.168 "copy": true,
00:08:11.168 "nvme_iov_md": false
00:08:11.168 },
00:08:11.168 "memory_domains": [
00:08:11.168 {
00:08:11.168 "dma_device_id": "system",
00:08:11.168 "dma_device_type": 1
00:08:11.168 },
00:08:11.168 {
00:08:11.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:11.168 "dma_device_type": 2
00:08:11.168 }
00:08:11.168 ],
00:08:11.168 "driver_specific": {}
00:08:11.168 }
00:08:11.168 ]
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.168 "name": "Existed_Raid",
00:08:11.168 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.168 "strip_size_kb": 0,
00:08:11.168 "state": "configuring",
00:08:11.168 "raid_level": "raid1",
00:08:11.168 "superblock": false,
00:08:11.168 "num_base_bdevs": 2,
00:08:11.168 "num_base_bdevs_discovered": 1,
00:08:11.168 "num_base_bdevs_operational": 2,
00:08:11.168 "base_bdevs_list": [
00:08:11.168 {
00:08:11.168 "name": "BaseBdev1",
00:08:11.168 "uuid": "c9f67048-0e1e-4807-9394-8a050ee6b46d",
00:08:11.168 "is_configured": true,
00:08:11.168 "data_offset": 0,
00:08:11.168 "data_size": 65536
00:08:11.168 },
00:08:11.168 {
00:08:11.168 "name": "BaseBdev2",
00:08:11.168 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.168 "is_configured": false,
00:08:11.168 "data_offset": 0,
00:08:11.168 "data_size": 0
00:08:11.168 }
00:08:11.168 ]
00:08:11.168 }'
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.168 09:26:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.743 [2024-11-15 09:27:00.060179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:11.743 [2024-11-15 09:27:00.060252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.743 [2024-11-15 09:27:00.072425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:11.743 [2024-11-15 09:27:00.076564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:11.743 [2024-11-15 09:27:00.076655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:11.743 "name": "Existed_Raid",
00:08:11.743 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.743 "strip_size_kb": 0,
00:08:11.743 "state": "configuring",
00:08:11.743 "raid_level": "raid1",
00:08:11.743 "superblock": false,
00:08:11.743 "num_base_bdevs": 2,
00:08:11.743 "num_base_bdevs_discovered": 1,
00:08:11.743 "num_base_bdevs_operational": 2,
00:08:11.743 "base_bdevs_list": [
00:08:11.743 {
00:08:11.743 "name": "BaseBdev1",
00:08:11.743 "uuid": "c9f67048-0e1e-4807-9394-8a050ee6b46d",
00:08:11.743 "is_configured": true,
00:08:11.743 "data_offset": 0,
00:08:11.743 "data_size": 65536
00:08:11.743 },
00:08:11.743 {
00:08:11.743 "name": "BaseBdev2",
00:08:11.743 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:11.743 "is_configured": false,
00:08:11.743 "data_offset": 0,
00:08:11.743 "data_size": 0
00:08:11.743 }
00:08:11.743 ]
00:08:11.743 }'
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:11.743 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.313 [2024-11-15 09:27:00.585035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:12.313 [2024-11-15 09:27:00.585197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:12.313 [2024-11-15 09:27:00.585227] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:08:12.313 [2024-11-15 09:27:00.585568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:08:12.313 [2024-11-15 09:27:00.585800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:12.313 [2024-11-15 09:27:00.585878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:12.313 [2024-11-15 09:27:00.586211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:12.313 BaseBdev2
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.313 [
00:08:12.313 {
00:08:12.313 "name": "BaseBdev2",
00:08:12.313 "aliases": [
00:08:12.313 "fab7ba6f-eaf0-403f-8b68-d7783bc21ad4"
00:08:12.313 ],
00:08:12.313 "product_name": "Malloc disk",
00:08:12.313 "block_size": 512,
00:08:12.313 "num_blocks": 65536,
00:08:12.313 "uuid": "fab7ba6f-eaf0-403f-8b68-d7783bc21ad4",
00:08:12.313 "assigned_rate_limits": {
00:08:12.313 "rw_ios_per_sec": 0,
00:08:12.313 "rw_mbytes_per_sec": 0,
00:08:12.313 "r_mbytes_per_sec": 0,
00:08:12.313 "w_mbytes_per_sec": 0
00:08:12.313 },
00:08:12.313 "claimed": true,
00:08:12.313 "claim_type": "exclusive_write",
00:08:12.313 "zoned": false,
00:08:12.313 "supported_io_types": {
00:08:12.313 "read": true,
00:08:12.313 "write": true,
00:08:12.313 "unmap": true,
00:08:12.313 "flush": true,
00:08:12.313 "reset": true,
00:08:12.313 "nvme_admin": false,
00:08:12.313 "nvme_io": false,
00:08:12.313 "nvme_io_md": false,
00:08:12.313 "write_zeroes": true,
00:08:12.313 "zcopy": true,
00:08:12.313 "get_zone_info": false,
00:08:12.313 "zone_management": false,
00:08:12.313 "zone_append": false,
00:08:12.313 "compare": false,
00:08:12.313 "compare_and_write": false,
00:08:12.313 "abort": true,
00:08:12.313 "seek_hole": false,
00:08:12.313 "seek_data": false,
00:08:12.313 "copy": true,
00:08:12.313 "nvme_iov_md": false
00:08:12.313 },
00:08:12.313 "memory_domains": [
00:08:12.313 {
00:08:12.313 "dma_device_id": "system",
00:08:12.313 "dma_device_type": 1
00:08:12.313 },
00:08:12.313 {
00:08:12.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.313 "dma_device_type": 2
00:08:12.313 }
00:08:12.313 ],
00:08:12.313 "driver_specific": {}
00:08:12.313 }
00:08:12.313 ]
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:12.313 "name": "Existed_Raid",
00:08:12.313 "uuid": "277c99e8-99a4-4fcd-b355-d32b27c2adb9",
00:08:12.313 "strip_size_kb": 0,
00:08:12.313 "state": "online",
00:08:12.313 "raid_level": "raid1",
00:08:12.313 "superblock": false,
00:08:12.313 "num_base_bdevs": 2,
00:08:12.313 "num_base_bdevs_discovered": 2,
00:08:12.313 "num_base_bdevs_operational": 2,
00:08:12.313 "base_bdevs_list": [
00:08:12.313 {
00:08:12.313 "name": "BaseBdev1",
00:08:12.313 "uuid": "c9f67048-0e1e-4807-9394-8a050ee6b46d",
00:08:12.313 "is_configured": true,
00:08:12.313 "data_offset": 0,
00:08:12.313 "data_size": 65536
00:08:12.313 },
00:08:12.313 {
00:08:12.313 "name": "BaseBdev2",
00:08:12.313 "uuid": "fab7ba6f-eaf0-403f-8b68-d7783bc21ad4",
00:08:12.313 "is_configured": true,
00:08:12.313 "data_offset": 0,
00:08:12.313 "data_size": 65536
00:08:12.313 }
00:08:12.313 ]
00:08:12.313 }'
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:12.313 09:27:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.880 [2024-11-15 09:27:01.100726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.880 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:12.880 "name": "Existed_Raid",
00:08:12.880 "aliases": [
00:08:12.880 "277c99e8-99a4-4fcd-b355-d32b27c2adb9"
00:08:12.880 ],
00:08:12.880 "product_name": "Raid Volume",
00:08:12.880 "block_size": 512,
00:08:12.880 "num_blocks": 65536,
00:08:12.880 "uuid": "277c99e8-99a4-4fcd-b355-d32b27c2adb9",
00:08:12.880 "assigned_rate_limits": {
00:08:12.880 "rw_ios_per_sec": 0,
00:08:12.880 "rw_mbytes_per_sec": 0,
00:08:12.880 "r_mbytes_per_sec": 0,
00:08:12.880 "w_mbytes_per_sec": 0
00:08:12.880 },
00:08:12.880 "claimed": false,
00:08:12.880 "zoned": false,
00:08:12.880 "supported_io_types": {
00:08:12.880 "read": true,
00:08:12.880 "write": true,
00:08:12.880 "unmap": false,
00:08:12.880 "flush": false,
00:08:12.880 "reset": true,
00:08:12.880 "nvme_admin": false,
00:08:12.880 "nvme_io": false,
00:08:12.880 "nvme_io_md": false,
00:08:12.880 "write_zeroes": true,
00:08:12.880 "zcopy": false,
00:08:12.880 "get_zone_info": false,
00:08:12.880 "zone_management": false,
00:08:12.880 "zone_append": false,
00:08:12.880 "compare": false,
00:08:12.880 "compare_and_write": false,
00:08:12.880 "abort": false,
00:08:12.880 "seek_hole": false,
00:08:12.880 "seek_data": false,
00:08:12.880 "copy": false,
00:08:12.880 "nvme_iov_md": false
00:08:12.880 },
00:08:12.880 "memory_domains": [
00:08:12.880 {
00:08:12.880 "dma_device_id": "system",
00:08:12.880 "dma_device_type": 1
00:08:12.880 },
00:08:12.880 {
00:08:12.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.880 "dma_device_type": 2
00:08:12.881 },
00:08:12.881 {
00:08:12.881 "dma_device_id": "system",
00:08:12.881 "dma_device_type": 1
00:08:12.881 },
00:08:12.881 {
00:08:12.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:12.881 "dma_device_type": 2
00:08:12.881 }
00:08:12.881 ],
00:08:12.881 "driver_specific": {
00:08:12.881 "raid": {
00:08:12.881 "uuid": "277c99e8-99a4-4fcd-b355-d32b27c2adb9",
00:08:12.881 "strip_size_kb": 0,
00:08:12.881 "state": "online",
00:08:12.881 "raid_level": "raid1",
00:08:12.881 "superblock": false,
00:08:12.881 "num_base_bdevs": 2,
00:08:12.881 "num_base_bdevs_discovered": 2,
00:08:12.881 "num_base_bdevs_operational": 2,
00:08:12.881 "base_bdevs_list": [
00:08:12.881 {
00:08:12.881 "name": "BaseBdev1",
00:08:12.881 "uuid": "c9f67048-0e1e-4807-9394-8a050ee6b46d",
00:08:12.881 "is_configured": true,
00:08:12.881 "data_offset": 0,
00:08:12.881 "data_size": 65536
00:08:12.881 },
00:08:12.881 {
00:08:12.881 "name": "BaseBdev2",
00:08:12.881 "uuid": "fab7ba6f-eaf0-403f-8b68-d7783bc21ad4",
00:08:12.881 "is_configured": true,
00:08:12.881 "data_offset": 0,
00:08:12.881 "data_size": 65536
00:08:12.881 }
00:08:12.881 ]
00:08:12.881 }
00:08:12.881 }
00:08:12.881 }'
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:12.881 BaseBdev2'
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:12.881 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:12.881 [2024-11-15 09:27:01.312161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:13.140 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:13.141 "name": "Existed_Raid",
00:08:13.141 "uuid": "277c99e8-99a4-4fcd-b355-d32b27c2adb9",
00:08:13.141 "strip_size_kb": 0,
00:08:13.141 "state": "online",
00:08:13.141 "raid_level": "raid1",
00:08:13.141 "superblock": false,
00:08:13.141 "num_base_bdevs": 2,
00:08:13.141 "num_base_bdevs_discovered": 1,
00:08:13.141 "num_base_bdevs_operational": 1,
00:08:13.141 "base_bdevs_list": [
00:08:13.141 {
00:08:13.141 "name": null, 00:08:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.141 "is_configured": false, 00:08:13.141 "data_offset": 0, 00:08:13.141 "data_size": 65536 00:08:13.141 }, 00:08:13.141 { 00:08:13.141 "name": "BaseBdev2", 00:08:13.141 "uuid": "fab7ba6f-eaf0-403f-8b68-d7783bc21ad4", 00:08:13.141 "is_configured": true, 00:08:13.141 "data_offset": 0, 00:08:13.141 "data_size": 65536 00:08:13.141 } 00:08:13.141 ] 00:08:13.141 }' 00:08:13.141 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.141 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.399 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.399 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.399 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.399 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.399 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.399 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.399 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.659 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.659 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.659 09:27:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.659 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.659 09:27:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:13.659 [2024-11-15 09:27:01.902944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.659 [2024-11-15 09:27:01.903070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.659 [2024-11-15 09:27:02.014682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.659 [2024-11-15 09:27:02.014750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.659 [2024-11-15 09:27:02.014766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62981 00:08:13.659 09:27:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62981 ']' 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62981 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62981 00:08:13.659 killing process with pid 62981 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62981' 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62981 00:08:13.659 [2024-11-15 09:27:02.105889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.659 09:27:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62981 00:08:13.918 [2024-11-15 09:27:02.125367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:15.294 00:08:15.294 real 0m5.405s 00:08:15.294 user 0m7.608s 00:08:15.294 sys 0m0.987s 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.294 ************************************ 00:08:15.294 END TEST raid_state_function_test 00:08:15.294 ************************************ 00:08:15.294 09:27:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:15.294 09:27:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:15.294 09:27:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.294 09:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.294 ************************************ 00:08:15.294 START TEST raid_state_function_test_sb 00:08:15.294 ************************************ 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63234 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63234' 00:08:15.294 Process raid pid: 63234 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63234 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 63234 ']' 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.294 09:27:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.294 [2024-11-15 09:27:03.586529] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:08:15.294 [2024-11-15 09:27:03.586673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.553 [2024-11-15 09:27:03.765913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.553 [2024-11-15 09:27:03.916438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.810 [2024-11-15 09:27:04.164600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.810 [2024-11-15 09:27:04.164664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:16.068 [2024-11-15 09:27:04.471481] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.068 [2024-11-15 09:27:04.471560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.068 [2024-11-15 09:27:04.471572] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.068 [2024-11-15 09:27:04.471583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.068 09:27:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.068 "name": "Existed_Raid", 00:08:16.068 "uuid": "2a2e7d09-5716-4b33-acaf-3b9c8965f395", 00:08:16.068 "strip_size_kb": 0, 00:08:16.068 "state": "configuring", 00:08:16.068 "raid_level": "raid1", 00:08:16.068 "superblock": true, 00:08:16.068 "num_base_bdevs": 2, 00:08:16.068 "num_base_bdevs_discovered": 0, 00:08:16.068 "num_base_bdevs_operational": 2, 00:08:16.068 "base_bdevs_list": [ 00:08:16.068 { 00:08:16.068 "name": "BaseBdev1", 00:08:16.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.068 "is_configured": false, 00:08:16.068 "data_offset": 0, 00:08:16.068 "data_size": 0 00:08:16.068 }, 00:08:16.068 { 00:08:16.068 "name": "BaseBdev2", 00:08:16.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.068 "is_configured": false, 00:08:16.068 "data_offset": 0, 00:08:16.068 "data_size": 0 00:08:16.068 } 00:08:16.068 ] 00:08:16.068 }' 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.068 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.637 [2024-11-15 
09:27:04.962584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.637 [2024-11-15 09:27:04.962701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.637 [2024-11-15 09:27:04.974516] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.637 [2024-11-15 09:27:04.974605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.637 [2024-11-15 09:27:04.974639] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.637 [2024-11-15 09:27:04.974667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.637 09:27:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.637 [2024-11-15 09:27:05.030872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.637 BaseBdev1 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.637 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.637 [ 00:08:16.637 { 00:08:16.637 "name": "BaseBdev1", 00:08:16.637 "aliases": [ 00:08:16.637 "2857b970-d2bb-4c12-bb79-5ed7a6179ca4" 00:08:16.637 ], 00:08:16.637 "product_name": "Malloc disk", 00:08:16.637 "block_size": 512, 00:08:16.637 "num_blocks": 65536, 00:08:16.637 "uuid": "2857b970-d2bb-4c12-bb79-5ed7a6179ca4", 00:08:16.637 "assigned_rate_limits": { 00:08:16.637 "rw_ios_per_sec": 0, 00:08:16.637 "rw_mbytes_per_sec": 0, 00:08:16.637 "r_mbytes_per_sec": 0, 00:08:16.637 
"w_mbytes_per_sec": 0 00:08:16.637 }, 00:08:16.637 "claimed": true, 00:08:16.637 "claim_type": "exclusive_write", 00:08:16.637 "zoned": false, 00:08:16.637 "supported_io_types": { 00:08:16.637 "read": true, 00:08:16.637 "write": true, 00:08:16.637 "unmap": true, 00:08:16.637 "flush": true, 00:08:16.637 "reset": true, 00:08:16.637 "nvme_admin": false, 00:08:16.637 "nvme_io": false, 00:08:16.637 "nvme_io_md": false, 00:08:16.637 "write_zeroes": true, 00:08:16.637 "zcopy": true, 00:08:16.638 "get_zone_info": false, 00:08:16.638 "zone_management": false, 00:08:16.638 "zone_append": false, 00:08:16.638 "compare": false, 00:08:16.638 "compare_and_write": false, 00:08:16.638 "abort": true, 00:08:16.638 "seek_hole": false, 00:08:16.638 "seek_data": false, 00:08:16.638 "copy": true, 00:08:16.638 "nvme_iov_md": false 00:08:16.638 }, 00:08:16.638 "memory_domains": [ 00:08:16.638 { 00:08:16.638 "dma_device_id": "system", 00:08:16.638 "dma_device_type": 1 00:08:16.638 }, 00:08:16.638 { 00:08:16.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.638 "dma_device_type": 2 00:08:16.638 } 00:08:16.638 ], 00:08:16.638 "driver_specific": {} 00:08:16.638 } 00:08:16.638 ] 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.638 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.898 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.898 "name": "Existed_Raid", 00:08:16.898 "uuid": "4f8de940-ae85-426d-84d4-06d03224cb91", 00:08:16.898 "strip_size_kb": 0, 00:08:16.898 "state": "configuring", 00:08:16.898 "raid_level": "raid1", 00:08:16.898 "superblock": true, 00:08:16.898 "num_base_bdevs": 2, 00:08:16.898 "num_base_bdevs_discovered": 1, 00:08:16.898 "num_base_bdevs_operational": 2, 00:08:16.898 "base_bdevs_list": [ 00:08:16.898 { 00:08:16.898 "name": "BaseBdev1", 00:08:16.898 "uuid": "2857b970-d2bb-4c12-bb79-5ed7a6179ca4", 00:08:16.898 "is_configured": true, 00:08:16.898 "data_offset": 2048, 00:08:16.898 "data_size": 63488 00:08:16.898 }, 00:08:16.898 { 00:08:16.898 "name": "BaseBdev2", 00:08:16.898 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:16.898 "is_configured": false, 00:08:16.898 "data_offset": 0, 00:08:16.898 "data_size": 0 00:08:16.898 } 00:08:16.898 ] 00:08:16.898 }' 00:08:16.898 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.898 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.158 [2024-11-15 09:27:05.530078] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.158 [2024-11-15 09:27:05.530212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.158 [2024-11-15 09:27:05.542110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.158 [2024-11-15 09:27:05.544561] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:17.158 [2024-11-15 09:27:05.544657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.158 "name": "Existed_Raid", 00:08:17.158 "uuid": "09aa311e-916c-4134-ab37-f162817d9758", 00:08:17.158 "strip_size_kb": 0, 00:08:17.158 "state": "configuring", 00:08:17.158 "raid_level": "raid1", 00:08:17.158 "superblock": true, 00:08:17.158 "num_base_bdevs": 2, 00:08:17.158 "num_base_bdevs_discovered": 1, 00:08:17.158 "num_base_bdevs_operational": 2, 00:08:17.158 "base_bdevs_list": [ 00:08:17.158 { 00:08:17.158 "name": "BaseBdev1", 00:08:17.158 "uuid": "2857b970-d2bb-4c12-bb79-5ed7a6179ca4", 00:08:17.158 "is_configured": true, 00:08:17.158 "data_offset": 2048, 00:08:17.158 "data_size": 63488 00:08:17.158 }, 00:08:17.158 { 00:08:17.158 "name": "BaseBdev2", 00:08:17.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.158 "is_configured": false, 00:08:17.158 "data_offset": 0, 00:08:17.158 "data_size": 0 00:08:17.158 } 00:08:17.158 ] 00:08:17.158 }' 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.158 09:27:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.724 [2024-11-15 09:27:06.050424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.724 [2024-11-15 09:27:06.050930] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.724 [2024-11-15 09:27:06.050992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.724 [2024-11-15 09:27:06.051328] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.724 [2024-11-15 09:27:06.051555] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.724 [2024-11-15 09:27:06.051604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.724 BaseBdev2 00:08:17.724 [2024-11-15 09:27:06.051810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.724 [ 00:08:17.724 { 00:08:17.724 "name": "BaseBdev2", 00:08:17.724 "aliases": [ 00:08:17.724 "1b9a6ba4-296f-4132-a669-a901f1375424" 00:08:17.724 ], 00:08:17.724 "product_name": "Malloc disk", 00:08:17.724 "block_size": 512, 00:08:17.724 "num_blocks": 65536, 00:08:17.724 "uuid": "1b9a6ba4-296f-4132-a669-a901f1375424", 00:08:17.724 "assigned_rate_limits": { 00:08:17.724 "rw_ios_per_sec": 0, 00:08:17.724 "rw_mbytes_per_sec": 0, 00:08:17.724 "r_mbytes_per_sec": 0, 00:08:17.724 "w_mbytes_per_sec": 0 00:08:17.724 }, 00:08:17.724 "claimed": true, 00:08:17.724 "claim_type": "exclusive_write", 00:08:17.724 "zoned": false, 00:08:17.724 "supported_io_types": { 00:08:17.724 "read": true, 00:08:17.724 "write": true, 00:08:17.724 "unmap": true, 00:08:17.724 "flush": true, 00:08:17.724 "reset": true, 00:08:17.724 "nvme_admin": false, 00:08:17.724 "nvme_io": false, 00:08:17.724 "nvme_io_md": false, 00:08:17.724 "write_zeroes": true, 00:08:17.724 "zcopy": true, 00:08:17.724 "get_zone_info": false, 00:08:17.724 "zone_management": false, 00:08:17.724 "zone_append": false, 00:08:17.724 "compare": false, 00:08:17.724 "compare_and_write": false, 00:08:17.724 "abort": true, 00:08:17.724 "seek_hole": false, 00:08:17.724 "seek_data": false, 00:08:17.724 "copy": true, 00:08:17.724 "nvme_iov_md": false 00:08:17.724 }, 00:08:17.724 "memory_domains": [ 00:08:17.724 { 00:08:17.724 "dma_device_id": "system", 00:08:17.724 "dma_device_type": 1 00:08:17.724 }, 00:08:17.724 { 00:08:17.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.724 "dma_device_type": 2 00:08:17.724 } 00:08:17.724 ], 00:08:17.724 "driver_specific": {} 00:08:17.724 } 00:08:17.724 ] 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.724 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.725 09:27:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.725 "name": "Existed_Raid", 00:08:17.725 "uuid": "09aa311e-916c-4134-ab37-f162817d9758", 00:08:17.725 "strip_size_kb": 0, 00:08:17.725 "state": "online", 00:08:17.725 "raid_level": "raid1", 00:08:17.725 "superblock": true, 00:08:17.725 "num_base_bdevs": 2, 00:08:17.725 "num_base_bdevs_discovered": 2, 00:08:17.725 "num_base_bdevs_operational": 2, 00:08:17.725 "base_bdevs_list": [ 00:08:17.725 { 00:08:17.725 "name": "BaseBdev1", 00:08:17.725 "uuid": "2857b970-d2bb-4c12-bb79-5ed7a6179ca4", 00:08:17.725 "is_configured": true, 00:08:17.725 "data_offset": 2048, 00:08:17.725 "data_size": 63488 00:08:17.725 }, 00:08:17.725 { 00:08:17.725 "name": "BaseBdev2", 00:08:17.725 "uuid": "1b9a6ba4-296f-4132-a669-a901f1375424", 00:08:17.725 "is_configured": true, 00:08:17.725 "data_offset": 2048, 00:08:17.725 "data_size": 63488 00:08:17.725 } 00:08:17.725 ] 00:08:17.725 }' 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.725 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.292 [2024-11-15 09:27:06.522019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.292 "name": "Existed_Raid", 00:08:18.292 "aliases": [ 00:08:18.292 "09aa311e-916c-4134-ab37-f162817d9758" 00:08:18.292 ], 00:08:18.292 "product_name": "Raid Volume", 00:08:18.292 "block_size": 512, 00:08:18.292 "num_blocks": 63488, 00:08:18.292 "uuid": "09aa311e-916c-4134-ab37-f162817d9758", 00:08:18.292 "assigned_rate_limits": { 00:08:18.292 "rw_ios_per_sec": 0, 00:08:18.292 "rw_mbytes_per_sec": 0, 00:08:18.292 "r_mbytes_per_sec": 0, 00:08:18.292 "w_mbytes_per_sec": 0 00:08:18.292 }, 00:08:18.292 "claimed": false, 00:08:18.292 "zoned": false, 00:08:18.292 "supported_io_types": { 00:08:18.292 "read": true, 00:08:18.292 "write": true, 00:08:18.292 "unmap": false, 00:08:18.292 "flush": false, 00:08:18.292 "reset": true, 00:08:18.292 "nvme_admin": false, 00:08:18.292 "nvme_io": false, 00:08:18.292 "nvme_io_md": false, 00:08:18.292 "write_zeroes": true, 00:08:18.292 "zcopy": false, 00:08:18.292 "get_zone_info": false, 00:08:18.292 "zone_management": false, 00:08:18.292 "zone_append": false, 00:08:18.292 "compare": false, 00:08:18.292 "compare_and_write": false, 00:08:18.292 "abort": false, 00:08:18.292 "seek_hole": false, 00:08:18.292 "seek_data": false, 00:08:18.292 "copy": false, 00:08:18.292 "nvme_iov_md": false 00:08:18.292 }, 00:08:18.292 "memory_domains": [ 00:08:18.292 { 00:08:18.292 
"dma_device_id": "system", 00:08:18.292 "dma_device_type": 1 00:08:18.292 }, 00:08:18.292 { 00:08:18.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.292 "dma_device_type": 2 00:08:18.292 }, 00:08:18.292 { 00:08:18.292 "dma_device_id": "system", 00:08:18.292 "dma_device_type": 1 00:08:18.292 }, 00:08:18.292 { 00:08:18.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.292 "dma_device_type": 2 00:08:18.292 } 00:08:18.292 ], 00:08:18.292 "driver_specific": { 00:08:18.292 "raid": { 00:08:18.292 "uuid": "09aa311e-916c-4134-ab37-f162817d9758", 00:08:18.292 "strip_size_kb": 0, 00:08:18.292 "state": "online", 00:08:18.292 "raid_level": "raid1", 00:08:18.292 "superblock": true, 00:08:18.292 "num_base_bdevs": 2, 00:08:18.292 "num_base_bdevs_discovered": 2, 00:08:18.292 "num_base_bdevs_operational": 2, 00:08:18.292 "base_bdevs_list": [ 00:08:18.292 { 00:08:18.292 "name": "BaseBdev1", 00:08:18.292 "uuid": "2857b970-d2bb-4c12-bb79-5ed7a6179ca4", 00:08:18.292 "is_configured": true, 00:08:18.292 "data_offset": 2048, 00:08:18.292 "data_size": 63488 00:08:18.292 }, 00:08:18.292 { 00:08:18.292 "name": "BaseBdev2", 00:08:18.292 "uuid": "1b9a6ba4-296f-4132-a669-a901f1375424", 00:08:18.292 "is_configured": true, 00:08:18.292 "data_offset": 2048, 00:08:18.292 "data_size": 63488 00:08:18.292 } 00:08:18.292 ] 00:08:18.292 } 00:08:18.292 } 00:08:18.292 }' 00:08:18.292 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:18.293 BaseBdev2' 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.293 09:27:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.293 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.552 [2024-11-15 09:27:06.757389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.552 09:27:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.552 "name": "Existed_Raid", 00:08:18.552 "uuid": "09aa311e-916c-4134-ab37-f162817d9758", 00:08:18.552 "strip_size_kb": 0, 00:08:18.552 "state": "online", 00:08:18.552 "raid_level": "raid1", 00:08:18.552 "superblock": true, 00:08:18.552 "num_base_bdevs": 2, 00:08:18.552 "num_base_bdevs_discovered": 1, 00:08:18.552 "num_base_bdevs_operational": 1, 00:08:18.552 "base_bdevs_list": [ 00:08:18.552 { 00:08:18.552 "name": null, 00:08:18.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.552 "is_configured": false, 00:08:18.552 "data_offset": 0, 00:08:18.552 "data_size": 63488 00:08:18.552 }, 00:08:18.552 { 00:08:18.552 "name": "BaseBdev2", 00:08:18.552 "uuid": "1b9a6ba4-296f-4132-a669-a901f1375424", 00:08:18.552 "is_configured": true, 00:08:18.552 "data_offset": 2048, 00:08:18.552 "data_size": 63488 00:08:18.552 } 00:08:18.552 ] 00:08:18.552 }' 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.552 09:27:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.125 09:27:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:19.125 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.126 [2024-11-15 09:27:07.343167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:19.126 [2024-11-15 09:27:07.343310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.126 [2024-11-15 09:27:07.455195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.126 [2024-11-15 09:27:07.455276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.126 [2024-11-15 09:27:07.455289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, 
state offline 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63234 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 63234 ']' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 63234 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63234 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:19.126 killing process with pid 63234 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63234' 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 63234 00:08:19.126 [2024-11-15 09:27:07.548226] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.126 09:27:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 63234 00:08:19.126 [2024-11-15 09:27:07.566928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.506 09:27:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.506 00:08:20.506 real 0m5.407s 00:08:20.506 user 0m7.590s 00:08:20.506 sys 0m0.982s 00:08:20.506 09:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:20.506 09:27:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.506 ************************************ 00:08:20.506 END TEST raid_state_function_test_sb 00:08:20.506 ************************************ 00:08:20.506 09:27:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:20.506 09:27:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:20.506 09:27:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:20.506 09:27:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.506 ************************************ 00:08:20.506 START TEST raid_superblock_test 00:08:20.506 ************************************ 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:08:20.507 
09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63491 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63491 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 
-- # '[' -z 63491 ']' 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:20.507 09:27:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.765 [2024-11-15 09:27:09.050835] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:08:20.765 [2024-11-15 09:27:09.051503] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63491 ] 00:08:20.765 [2024-11-15 09:27:09.207480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.024 [2024-11-15 09:27:09.350730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.283 [2024-11-15 09:27:09.602509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.283 [2024-11-15 09:27:09.602667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- 
# (( i <= num_base_bdevs )) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.542 malloc1 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.542 [2024-11-15 09:27:09.960184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:21.542 [2024-11-15 09:27:09.960377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.542 [2024-11-15 09:27:09.960433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:21.542 [2024-11-15 09:27:09.960474] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.542 [2024-11-15 09:27:09.963331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.542 [2024-11-15 09:27:09.963428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:21.542 pt1 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.542 09:27:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.802 malloc2 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.802 [2024-11-15 09:27:10.033263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.802 [2024-11-15 09:27:10.033375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.802 [2024-11-15 09:27:10.033427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:21.802 [2024-11-15 09:27:10.033462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.802 [2024-11-15 09:27:10.036020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.802 [2024-11-15 09:27:10.036086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.802 pt2 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.802 [2024-11-15 09:27:10.045330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.802 [2024-11-15 09:27:10.047671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.802 [2024-11-15 09:27:10.047875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:21.802 [2024-11-15 09:27:10.047894] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.802 [2024-11-15 09:27:10.048198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.802 [2024-11-15 09:27:10.048398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:21.802 [2024-11-15 09:27:10.048421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:21.802 [2024-11-15 09:27:10.048616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.802 "name": "raid_bdev1", 00:08:21.802 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:21.802 "strip_size_kb": 0, 00:08:21.802 "state": "online", 00:08:21.802 "raid_level": "raid1", 00:08:21.802 "superblock": true, 00:08:21.802 "num_base_bdevs": 2, 00:08:21.802 "num_base_bdevs_discovered": 2, 00:08:21.802 "num_base_bdevs_operational": 2, 00:08:21.802 "base_bdevs_list": [ 00:08:21.802 { 00:08:21.802 "name": "pt1", 00:08:21.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.802 "is_configured": true, 00:08:21.802 "data_offset": 2048, 00:08:21.802 "data_size": 63488 00:08:21.802 }, 00:08:21.802 { 00:08:21.802 "name": "pt2", 00:08:21.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.802 "is_configured": true, 00:08:21.802 "data_offset": 2048, 00:08:21.802 "data_size": 63488 00:08:21.802 } 00:08:21.802 ] 00:08:21.802 }' 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.802 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.061 
09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.061 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.061 [2024-11-15 09:27:10.516839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.319 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.319 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.319 "name": "raid_bdev1", 00:08:22.319 "aliases": [ 00:08:22.319 "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4" 00:08:22.319 ], 00:08:22.320 "product_name": "Raid Volume", 00:08:22.320 "block_size": 512, 00:08:22.320 "num_blocks": 63488, 00:08:22.320 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:22.320 "assigned_rate_limits": { 00:08:22.320 "rw_ios_per_sec": 0, 00:08:22.320 "rw_mbytes_per_sec": 0, 00:08:22.320 "r_mbytes_per_sec": 0, 00:08:22.320 "w_mbytes_per_sec": 0 00:08:22.320 }, 00:08:22.320 "claimed": false, 00:08:22.320 "zoned": false, 00:08:22.320 "supported_io_types": { 00:08:22.320 "read": true, 00:08:22.320 "write": true, 00:08:22.320 "unmap": false, 00:08:22.320 "flush": false, 00:08:22.320 "reset": true, 00:08:22.320 "nvme_admin": false, 00:08:22.320 "nvme_io": false, 00:08:22.320 "nvme_io_md": false, 00:08:22.320 "write_zeroes": true, 00:08:22.320 "zcopy": false, 00:08:22.320 "get_zone_info": false, 00:08:22.320 "zone_management": false, 00:08:22.320 "zone_append": false, 00:08:22.320 "compare": false, 00:08:22.320 
"compare_and_write": false, 00:08:22.320 "abort": false, 00:08:22.320 "seek_hole": false, 00:08:22.320 "seek_data": false, 00:08:22.320 "copy": false, 00:08:22.320 "nvme_iov_md": false 00:08:22.320 }, 00:08:22.320 "memory_domains": [ 00:08:22.320 { 00:08:22.320 "dma_device_id": "system", 00:08:22.320 "dma_device_type": 1 00:08:22.320 }, 00:08:22.320 { 00:08:22.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.320 "dma_device_type": 2 00:08:22.320 }, 00:08:22.320 { 00:08:22.320 "dma_device_id": "system", 00:08:22.320 "dma_device_type": 1 00:08:22.320 }, 00:08:22.320 { 00:08:22.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.320 "dma_device_type": 2 00:08:22.320 } 00:08:22.320 ], 00:08:22.320 "driver_specific": { 00:08:22.320 "raid": { 00:08:22.320 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:22.320 "strip_size_kb": 0, 00:08:22.320 "state": "online", 00:08:22.320 "raid_level": "raid1", 00:08:22.320 "superblock": true, 00:08:22.320 "num_base_bdevs": 2, 00:08:22.320 "num_base_bdevs_discovered": 2, 00:08:22.320 "num_base_bdevs_operational": 2, 00:08:22.320 "base_bdevs_list": [ 00:08:22.320 { 00:08:22.320 "name": "pt1", 00:08:22.320 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.320 "is_configured": true, 00:08:22.320 "data_offset": 2048, 00:08:22.320 "data_size": 63488 00:08:22.320 }, 00:08:22.320 { 00:08:22.320 "name": "pt2", 00:08:22.320 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.320 "is_configured": true, 00:08:22.320 "data_offset": 2048, 00:08:22.320 "data_size": 63488 00:08:22.320 } 00:08:22.320 ] 00:08:22.320 } 00:08:22.320 } 00:08:22.320 }' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:22.320 pt2' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.320 09:27:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:22.320 [2024-11-15 09:27:10.740489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1e3007e1-75ca-4371-a6f4-aa36bbdfaed4 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1e3007e1-75ca-4371-a6f4-aa36bbdfaed4 ']' 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.320 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 [2024-11-15 09:27:10.788033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.580 [2024-11-15 09:27:10.788078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.580 [2024-11-15 09:27:10.788214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.580 [2024-11-15 09:27:10.788286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.580 [2024-11-15 09:27:10.788300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 [2024-11-15 09:27:10.903907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:22.580 [2024-11-15 09:27:10.906204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:22.580 [2024-11-15 
09:27:10.906318] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:22.580 [2024-11-15 09:27:10.906427] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:22.580 [2024-11-15 09:27:10.906481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.580 [2024-11-15 09:27:10.906517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:22.580 request: 00:08:22.580 { 00:08:22.580 "name": "raid_bdev1", 00:08:22.580 "raid_level": "raid1", 00:08:22.580 "base_bdevs": [ 00:08:22.580 "malloc1", 00:08:22.580 "malloc2" 00:08:22.580 ], 00:08:22.580 "superblock": false, 00:08:22.580 "method": "bdev_raid_create", 00:08:22.580 "req_id": 1 00:08:22.580 } 00:08:22.580 Got JSON-RPC error response 00:08:22.580 response: 00:08:22.580 { 00:08:22.580 "code": -17, 00:08:22.580 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:22.580 } 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.580 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.580 [2024-11-15 09:27:10.971768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:22.580 [2024-11-15 09:27:10.971934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.580 [2024-11-15 09:27:10.971963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:22.581 [2024-11-15 09:27:10.971977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.581 [2024-11-15 09:27:10.974925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.581 [2024-11-15 09:27:10.975012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:22.581 [2024-11-15 09:27:10.975154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:22.581 [2024-11-15 09:27:10.975275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:22.581 pt1 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.581 09:27:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.581 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.581 "name": "raid_bdev1", 00:08:22.581 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:22.581 "strip_size_kb": 0, 00:08:22.581 "state": "configuring", 00:08:22.581 "raid_level": "raid1", 00:08:22.581 "superblock": true, 00:08:22.581 "num_base_bdevs": 2, 00:08:22.581 "num_base_bdevs_discovered": 1, 00:08:22.581 "num_base_bdevs_operational": 2, 00:08:22.581 "base_bdevs_list": [ 00:08:22.581 { 00:08:22.581 "name": 
"pt1", 00:08:22.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.581 "is_configured": true, 00:08:22.581 "data_offset": 2048, 00:08:22.581 "data_size": 63488 00:08:22.581 }, 00:08:22.581 { 00:08:22.581 "name": null, 00:08:22.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.581 "is_configured": false, 00:08:22.581 "data_offset": 2048, 00:08:22.581 "data_size": 63488 00:08:22.581 } 00:08:22.581 ] 00:08:22.581 }' 00:08:22.581 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.581 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.150 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:23.150 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:23.150 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:23.150 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.150 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.150 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.150 [2024-11-15 09:27:11.466956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.150 [2024-11-15 09:27:11.467122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.151 [2024-11-15 09:27:11.467171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:23.151 [2024-11-15 09:27:11.467223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.151 [2024-11-15 09:27:11.467867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.151 [2024-11-15 09:27:11.467944] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:08:23.151 [2024-11-15 09:27:11.468087] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:23.151 [2024-11-15 09:27:11.468164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.151 [2024-11-15 09:27:11.468369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.151 [2024-11-15 09:27:11.468432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:23.151 [2024-11-15 09:27:11.468775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:23.151 [2024-11-15 09:27:11.469035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.151 [2024-11-15 09:27:11.469089] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:23.151 [2024-11-15 09:27:11.469334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.151 pt2 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.151 "name": "raid_bdev1", 00:08:23.151 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:23.151 "strip_size_kb": 0, 00:08:23.151 "state": "online", 00:08:23.151 "raid_level": "raid1", 00:08:23.151 "superblock": true, 00:08:23.151 "num_base_bdevs": 2, 00:08:23.151 "num_base_bdevs_discovered": 2, 00:08:23.151 "num_base_bdevs_operational": 2, 00:08:23.151 "base_bdevs_list": [ 00:08:23.151 { 00:08:23.151 "name": "pt1", 00:08:23.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.151 "is_configured": true, 00:08:23.151 "data_offset": 2048, 00:08:23.151 "data_size": 63488 00:08:23.151 }, 00:08:23.151 { 00:08:23.151 "name": "pt2", 00:08:23.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.151 "is_configured": true, 00:08:23.151 "data_offset": 2048, 00:08:23.151 "data_size": 63488 00:08:23.151 } 00:08:23.151 ] 00:08:23.151 }' 
00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.151 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.719 [2024-11-15 09:27:11.954390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.719 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.719 "name": "raid_bdev1", 00:08:23.719 "aliases": [ 00:08:23.719 "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4" 00:08:23.719 ], 00:08:23.719 "product_name": "Raid Volume", 00:08:23.719 "block_size": 512, 00:08:23.719 "num_blocks": 63488, 00:08:23.719 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:23.719 "assigned_rate_limits": { 00:08:23.719 "rw_ios_per_sec": 0, 00:08:23.719 "rw_mbytes_per_sec": 
0, 00:08:23.719 "r_mbytes_per_sec": 0, 00:08:23.719 "w_mbytes_per_sec": 0 00:08:23.719 }, 00:08:23.719 "claimed": false, 00:08:23.720 "zoned": false, 00:08:23.720 "supported_io_types": { 00:08:23.720 "read": true, 00:08:23.720 "write": true, 00:08:23.720 "unmap": false, 00:08:23.720 "flush": false, 00:08:23.720 "reset": true, 00:08:23.720 "nvme_admin": false, 00:08:23.720 "nvme_io": false, 00:08:23.720 "nvme_io_md": false, 00:08:23.720 "write_zeroes": true, 00:08:23.720 "zcopy": false, 00:08:23.720 "get_zone_info": false, 00:08:23.720 "zone_management": false, 00:08:23.720 "zone_append": false, 00:08:23.720 "compare": false, 00:08:23.720 "compare_and_write": false, 00:08:23.720 "abort": false, 00:08:23.720 "seek_hole": false, 00:08:23.720 "seek_data": false, 00:08:23.720 "copy": false, 00:08:23.720 "nvme_iov_md": false 00:08:23.720 }, 00:08:23.720 "memory_domains": [ 00:08:23.720 { 00:08:23.720 "dma_device_id": "system", 00:08:23.720 "dma_device_type": 1 00:08:23.720 }, 00:08:23.720 { 00:08:23.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.720 "dma_device_type": 2 00:08:23.720 }, 00:08:23.720 { 00:08:23.720 "dma_device_id": "system", 00:08:23.720 "dma_device_type": 1 00:08:23.720 }, 00:08:23.720 { 00:08:23.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.720 "dma_device_type": 2 00:08:23.720 } 00:08:23.720 ], 00:08:23.720 "driver_specific": { 00:08:23.720 "raid": { 00:08:23.720 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:23.720 "strip_size_kb": 0, 00:08:23.720 "state": "online", 00:08:23.720 "raid_level": "raid1", 00:08:23.720 "superblock": true, 00:08:23.720 "num_base_bdevs": 2, 00:08:23.720 "num_base_bdevs_discovered": 2, 00:08:23.720 "num_base_bdevs_operational": 2, 00:08:23.720 "base_bdevs_list": [ 00:08:23.720 { 00:08:23.720 "name": "pt1", 00:08:23.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.720 "is_configured": true, 00:08:23.720 "data_offset": 2048, 00:08:23.720 "data_size": 63488 00:08:23.720 }, 00:08:23.720 { 
00:08:23.720 "name": "pt2", 00:08:23.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.720 "is_configured": true, 00:08:23.720 "data_offset": 2048, 00:08:23.720 "data_size": 63488 00:08:23.720 } 00:08:23.720 ] 00:08:23.720 } 00:08:23.720 } 00:08:23.720 }' 00:08:23.720 09:27:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:23.720 pt2' 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.720 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:23.979 [2024-11-15 09:27:12.197981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1e3007e1-75ca-4371-a6f4-aa36bbdfaed4 '!=' 1e3007e1-75ca-4371-a6f4-aa36bbdfaed4 ']' 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.979 09:27:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.979 [2024-11-15 09:27:12.245707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.979 "name": "raid_bdev1", 00:08:23.979 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:23.979 "strip_size_kb": 0, 00:08:23.979 "state": "online", 00:08:23.979 "raid_level": "raid1", 00:08:23.979 "superblock": true, 00:08:23.979 "num_base_bdevs": 2, 00:08:23.979 "num_base_bdevs_discovered": 1, 00:08:23.979 "num_base_bdevs_operational": 1, 00:08:23.979 "base_bdevs_list": [ 00:08:23.979 { 00:08:23.979 "name": null, 00:08:23.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.979 "is_configured": false, 00:08:23.979 "data_offset": 0, 00:08:23.979 "data_size": 63488 00:08:23.979 }, 00:08:23.979 { 00:08:23.979 "name": "pt2", 00:08:23.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.979 "is_configured": true, 00:08:23.979 "data_offset": 2048, 00:08:23.979 "data_size": 63488 00:08:23.979 } 00:08:23.979 ] 00:08:23.979 }' 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.979 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.239 [2024-11-15 09:27:12.636991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.239 [2024-11-15 09:27:12.637039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.239 [2024-11-15 09:27:12.637155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.239 [2024-11-15 09:27:12.637216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.239 [2024-11-15 09:27:12.637231] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=1 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.239 [2024-11-15 09:27:12.696895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.239 [2024-11-15 09:27:12.697001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.239 [2024-11-15 09:27:12.697026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:24.239 [2024-11-15 09:27:12.697040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.239 [2024-11-15 09:27:12.699947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.239 [2024-11-15 09:27:12.700054] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.239 [2024-11-15 09:27:12.700209] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:24.239 [2024-11-15 09:27:12.700272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.239 [2024-11-15 09:27:12.700430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:24.239 [2024-11-15 09:27:12.700446] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:24.239 [2024-11-15 09:27:12.700752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:24.239 [2024-11-15 09:27:12.700976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:24.239 [2024-11-15 09:27:12.700989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008200 00:08:24.239 [2024-11-15 09:27:12.701253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.239 pt2 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.239 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.502 "name": "raid_bdev1", 00:08:24.502 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:24.502 "strip_size_kb": 0, 00:08:24.502 "state": "online", 00:08:24.502 "raid_level": "raid1", 00:08:24.502 "superblock": true, 00:08:24.502 "num_base_bdevs": 2, 00:08:24.502 "num_base_bdevs_discovered": 1, 00:08:24.502 "num_base_bdevs_operational": 1, 00:08:24.502 "base_bdevs_list": [ 00:08:24.502 { 00:08:24.502 "name": null, 00:08:24.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.502 "is_configured": false, 00:08:24.502 "data_offset": 2048, 00:08:24.502 "data_size": 63488 00:08:24.502 }, 00:08:24.502 { 00:08:24.502 "name": "pt2", 00:08:24.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.502 "is_configured": true, 00:08:24.502 "data_offset": 2048, 00:08:24.502 "data_size": 63488 00:08:24.502 } 00:08:24.502 ] 00:08:24.502 }' 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.502 09:27:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.761 [2024-11-15 09:27:13.128521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.761 [2024-11-15 09:27:13.128629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.761 [2024-11-15 09:27:13.128745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.761 [2024-11-15 09:27:13.128813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.761 [2024-11-15 09:27:13.128826] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.761 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.761 [2024-11-15 09:27:13.188464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.761 [2024-11-15 09:27:13.188605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.761 [2024-11-15 09:27:13.188660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:24.761 [2024-11-15 09:27:13.188698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.761 [2024-11-15 09:27:13.191656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:08:24.761 [2024-11-15 09:27:13.191741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.761 [2024-11-15 09:27:13.191924] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:24.761 [2024-11-15 09:27:13.192025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.761 [2024-11-15 09:27:13.192246] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:24.761 [2024-11-15 09:27:13.192309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.761 [2024-11-15 09:27:13.192355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:24.761 [2024-11-15 09:27:13.192476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.761 [2024-11-15 09:27:13.192614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:24.761 [2024-11-15 09:27:13.192656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:24.761 [2024-11-15 09:27:13.193015] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:24.761 pt1 00:08:24.761 [2024-11-15 09:27:13.193236] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:24.762 [2024-11-15 09:27:13.193256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:24.762 [2024-11-15 09:27:13.193495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.762 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.021 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.021 "name": "raid_bdev1", 00:08:25.021 "uuid": "1e3007e1-75ca-4371-a6f4-aa36bbdfaed4", 00:08:25.021 "strip_size_kb": 0, 00:08:25.021 "state": "online", 00:08:25.021 "raid_level": "raid1", 00:08:25.021 "superblock": true, 00:08:25.021 "num_base_bdevs": 2, 00:08:25.021 
"num_base_bdevs_discovered": 1, 00:08:25.021 "num_base_bdevs_operational": 1, 00:08:25.021 "base_bdevs_list": [ 00:08:25.021 { 00:08:25.021 "name": null, 00:08:25.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.021 "is_configured": false, 00:08:25.021 "data_offset": 2048, 00:08:25.021 "data_size": 63488 00:08:25.021 }, 00:08:25.021 { 00:08:25.021 "name": "pt2", 00:08:25.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:25.021 "is_configured": true, 00:08:25.021 "data_offset": 2048, 00:08:25.021 "data_size": 63488 00:08:25.021 } 00:08:25.021 ] 00:08:25.021 }' 00:08:25.021 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.021 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.280 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:25.281 [2024-11-15 09:27:13.632341] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1e3007e1-75ca-4371-a6f4-aa36bbdfaed4 '!=' 1e3007e1-75ca-4371-a6f4-aa36bbdfaed4 ']' 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63491 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63491 ']' 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63491 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63491 00:08:25.281 killing process with pid 63491 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63491' 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63491 00:08:25.281 09:27:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63491 00:08:25.281 [2024-11-15 09:27:13.712368] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.281 [2024-11-15 09:27:13.712517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.281 [2024-11-15 09:27:13.712630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.281 [2024-11-15 09:27:13.712653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:25.540 [2024-11-15 09:27:13.970994] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.917 ************************************ 00:08:26.917 END TEST raid_superblock_test 00:08:26.917 ************************************ 00:08:26.917 09:27:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:26.917 00:08:26.917 real 0m6.353s 00:08:26.917 user 0m9.379s 00:08:26.917 sys 0m1.066s 00:08:26.917 09:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.917 09:27:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.917 09:27:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:26.917 09:27:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:26.917 09:27:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.917 09:27:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.917 ************************************ 00:08:26.917 START TEST raid_read_error_test 00:08:26.917 ************************************ 00:08:26.917 09:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:08:26.917 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:26.917 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:26.917 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:26.917 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:26.917 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.917 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:26.918 09:27:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:26.918 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YZOYN8R25b 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63823 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63823 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63823 ']' 00:08:27.178 09:27:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.178 09:27:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.178 [2024-11-15 09:27:15.512018] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:08:27.178 [2024-11-15 09:27:15.512390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63823 ] 00:08:27.437 [2024-11-15 09:27:15.698818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.437 [2024-11-15 09:27:15.821822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.695 [2024-11-15 09:27:16.038052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.695 [2024-11-15 09:27:16.038130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.955 BaseBdev1_malloc 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.955 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 true 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 [2024-11-15 09:27:16.428524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:28.215 [2024-11-15 09:27:16.428605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.215 [2024-11-15 09:27:16.428632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:28.215 [2024-11-15 09:27:16.428645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.215 [2024-11-15 09:27:16.431149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.215 [2024-11-15 09:27:16.431199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:28.215 BaseBdev1 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 BaseBdev2_malloc 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 true 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.215 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 [2024-11-15 09:27:16.494137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:28.215 [2024-11-15 09:27:16.494213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.215 [2024-11-15 09:27:16.494234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:28.216 [2024-11-15 09:27:16.494246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.216 [2024-11-15 09:27:16.496477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.216 [2024-11-15 09:27:16.496644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:28.216 BaseBdev2 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.216 [2024-11-15 09:27:16.502166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.216 
[2024-11-15 09:27:16.504034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.216 [2024-11-15 09:27:16.504317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.216 [2024-11-15 09:27:16.504384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:28.216 [2024-11-15 09:27:16.504716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:28.216 [2024-11-15 09:27:16.504976] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.216 [2024-11-15 09:27:16.505026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:28.216 [2024-11-15 09:27:16.505258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.216 "name": "raid_bdev1", 00:08:28.216 "uuid": "9877b592-dd91-4cfb-9758-833696c3389c", 00:08:28.216 "strip_size_kb": 0, 00:08:28.216 "state": "online", 00:08:28.216 "raid_level": "raid1", 00:08:28.216 "superblock": true, 00:08:28.216 "num_base_bdevs": 2, 00:08:28.216 "num_base_bdevs_discovered": 2, 00:08:28.216 "num_base_bdevs_operational": 2, 00:08:28.216 "base_bdevs_list": [ 00:08:28.216 { 00:08:28.216 "name": "BaseBdev1", 00:08:28.216 "uuid": "6622ea36-1d0c-58f4-9405-04617bfe214a", 00:08:28.216 "is_configured": true, 00:08:28.216 "data_offset": 2048, 00:08:28.216 "data_size": 63488 00:08:28.216 }, 00:08:28.216 { 00:08:28.216 "name": "BaseBdev2", 00:08:28.216 "uuid": "48da0d8b-05bd-5918-8e20-0878cc1621a3", 00:08:28.216 "is_configured": true, 00:08:28.216 "data_offset": 2048, 00:08:28.216 "data_size": 63488 00:08:28.216 } 00:08:28.216 ] 00:08:28.216 }' 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.216 09:27:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.475 09:27:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:28.475 09:27:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:28.733 [2024-11-15 09:27:17.034539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.671 "name": "raid_bdev1", 00:08:29.671 "uuid": "9877b592-dd91-4cfb-9758-833696c3389c", 00:08:29.671 "strip_size_kb": 0, 00:08:29.671 "state": "online", 00:08:29.671 "raid_level": "raid1", 00:08:29.671 "superblock": true, 00:08:29.671 "num_base_bdevs": 2, 00:08:29.671 "num_base_bdevs_discovered": 2, 00:08:29.671 "num_base_bdevs_operational": 2, 00:08:29.671 "base_bdevs_list": [ 00:08:29.671 { 00:08:29.671 "name": "BaseBdev1", 00:08:29.671 "uuid": "6622ea36-1d0c-58f4-9405-04617bfe214a", 00:08:29.671 "is_configured": true, 00:08:29.671 "data_offset": 2048, 00:08:29.671 "data_size": 63488 00:08:29.671 }, 00:08:29.671 { 00:08:29.671 "name": "BaseBdev2", 00:08:29.671 "uuid": "48da0d8b-05bd-5918-8e20-0878cc1621a3", 00:08:29.671 "is_configured": true, 00:08:29.671 "data_offset": 2048, 00:08:29.671 "data_size": 63488 00:08:29.671 } 00:08:29.671 ] 00:08:29.671 }' 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.671 09:27:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.931 09:27:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:29.931 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.931 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.931 [2024-11-15 09:27:18.390906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:29.931 [2024-11-15 09:27:18.390957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:29.931 [2024-11-15 09:27:18.394142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.931 [2024-11-15 09:27:18.394246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.931 [2024-11-15 09:27:18.394415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.931 [2024-11-15 09:27:18.394508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:29.931 { 00:08:29.931 "results": [ 00:08:29.931 { 00:08:29.931 "job": "raid_bdev1", 00:08:29.931 "core_mask": "0x1", 00:08:29.931 "workload": "randrw", 00:08:29.931 "percentage": 50, 00:08:29.931 "status": "finished", 00:08:29.931 "queue_depth": 1, 00:08:29.931 "io_size": 131072, 00:08:29.931 "runtime": 1.357061, 00:08:29.931 "iops": 16526.891569354655, 00:08:29.931 "mibps": 2065.861446169332, 00:08:29.931 "io_failed": 0, 00:08:29.931 "io_timeout": 0, 00:08:29.931 "avg_latency_us": 57.696727499857865, 00:08:29.931 "min_latency_us": 24.370305676855896, 00:08:29.931 "max_latency_us": 1566.8541484716156 00:08:29.931 } 00:08:29.931 ], 00:08:29.931 "core_count": 1 00:08:29.931 } 00:08:29.931 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63823 00:08:30.191 
09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63823 ']' 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63823 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63823 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63823' 00:08:30.191 killing process with pid 63823 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63823 00:08:30.191 [2024-11-15 09:27:18.439563] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.191 09:27:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63823 00:08:30.191 [2024-11-15 09:27:18.590395] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YZOYN8R25b 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:31.596 09:27:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:31.596 00:08:31.596 real 0m4.509s 00:08:31.596 user 0m5.392s 00:08:31.596 sys 0m0.578s 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:31.596 09:27:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.596 ************************************ 00:08:31.596 END TEST raid_read_error_test 00:08:31.596 ************************************ 00:08:31.596 09:27:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:31.596 09:27:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:31.596 09:27:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:31.596 09:27:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.596 ************************************ 00:08:31.596 START TEST raid_write_error_test 00:08:31.596 ************************************ 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.596 
09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:31.596 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.omVmRC1FUa 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63968 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63968 00:08:31.597 
09:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63968 ']' 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:31.597 09:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.597 [2024-11-15 09:27:20.052247] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:08:31.597 [2024-11-15 09:27:20.052381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63968 ] 00:08:31.856 [2024-11-15 09:27:20.212859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.176 [2024-11-15 09:27:20.337588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.176 [2024-11-15 09:27:20.558209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.176 [2024-11-15 09:27:20.558277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.746 BaseBdev1_malloc 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.746 09:27:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.746 true 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.746 [2024-11-15 09:27:21.017263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:32.746 [2024-11-15 09:27:21.017335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.746 [2024-11-15 09:27:21.017359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:32.746 [2024-11-15 09:27:21.017373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.746 [2024-11-15 09:27:21.019734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.746 [2024-11-15 09:27:21.019782] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:32.746 BaseBdev1 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.746 BaseBdev2_malloc 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.746 true 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.746 [2024-11-15 09:27:21.088978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:32.746 [2024-11-15 09:27:21.089179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.746 [2024-11-15 09:27:21.089209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:32.746 
[2024-11-15 09:27:21.089224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.746 [2024-11-15 09:27:21.091692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.746 [2024-11-15 09:27:21.091742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:32.746 BaseBdev2 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.746 [2024-11-15 09:27:21.101042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.746 [2024-11-15 09:27:21.103109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.746 [2024-11-15 09:27:21.103342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:32.746 [2024-11-15 09:27:21.103359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.746 [2024-11-15 09:27:21.103640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:32.746 [2024-11-15 09:27:21.103829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:32.746 [2024-11-15 09:27:21.103840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:32.746 [2024-11-15 09:27:21.104039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.746 
09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.746 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.747 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.747 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.747 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.747 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.747 "name": "raid_bdev1", 00:08:32.747 "uuid": "597c8f56-28e3-4f3d-8c68-2e0f4b50f91f", 00:08:32.747 "strip_size_kb": 0, 00:08:32.747 "state": "online", 00:08:32.747 "raid_level": "raid1", 00:08:32.747 "superblock": true, 00:08:32.747 
"num_base_bdevs": 2, 00:08:32.747 "num_base_bdevs_discovered": 2, 00:08:32.747 "num_base_bdevs_operational": 2, 00:08:32.747 "base_bdevs_list": [ 00:08:32.747 { 00:08:32.747 "name": "BaseBdev1", 00:08:32.747 "uuid": "17045e34-042d-5b80-82f5-62a3b256c790", 00:08:32.747 "is_configured": true, 00:08:32.747 "data_offset": 2048, 00:08:32.747 "data_size": 63488 00:08:32.747 }, 00:08:32.747 { 00:08:32.747 "name": "BaseBdev2", 00:08:32.747 "uuid": "6529b5ab-a503-544e-9e29-9c46fa8126f9", 00:08:32.747 "is_configured": true, 00:08:32.747 "data_offset": 2048, 00:08:32.747 "data_size": 63488 00:08:32.747 } 00:08:32.747 ] 00:08:32.747 }' 00:08:32.747 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.747 09:27:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.315 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:33.315 09:27:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:33.315 [2024-11-15 09:27:21.637770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.252 [2024-11-15 09:27:22.533948] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:34.252 [2024-11-15 09:27:22.534025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.252 [2024-11-15 09:27:22.534233] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:34.252 09:27:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.252 "name": "raid_bdev1", 00:08:34.252 "uuid": "597c8f56-28e3-4f3d-8c68-2e0f4b50f91f", 00:08:34.252 "strip_size_kb": 0, 00:08:34.252 "state": "online", 00:08:34.252 "raid_level": "raid1", 00:08:34.252 "superblock": true, 00:08:34.252 "num_base_bdevs": 2, 00:08:34.252 "num_base_bdevs_discovered": 1, 00:08:34.252 "num_base_bdevs_operational": 1, 00:08:34.252 "base_bdevs_list": [ 00:08:34.252 { 00:08:34.252 "name": null, 00:08:34.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.252 "is_configured": false, 00:08:34.252 "data_offset": 0, 00:08:34.252 "data_size": 63488 00:08:34.252 }, 00:08:34.252 { 00:08:34.252 "name": "BaseBdev2", 00:08:34.252 "uuid": "6529b5ab-a503-544e-9e29-9c46fa8126f9", 00:08:34.252 "is_configured": true, 00:08:34.252 "data_offset": 2048, 00:08:34.252 "data_size": 63488 00:08:34.252 } 00:08:34.252 ] 00:08:34.252 }' 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.252 09:27:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.822 [2024-11-15 09:27:23.035506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.822 [2024-11-15 09:27:23.035617] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.822 [2024-11-15 09:27:23.038876] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.822 [2024-11-15 09:27:23.038967] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.822 [2024-11-15 09:27:23.039075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.822 [2024-11-15 09:27:23.039133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.822 { 00:08:34.822 "results": [ 00:08:34.822 { 00:08:34.822 "job": "raid_bdev1", 00:08:34.822 "core_mask": "0x1", 00:08:34.822 "workload": "randrw", 00:08:34.822 "percentage": 50, 00:08:34.822 "status": "finished", 00:08:34.822 "queue_depth": 1, 00:08:34.822 "io_size": 131072, 00:08:34.822 "runtime": 1.398424, 00:08:34.822 "iops": 18633.118424740995, 00:08:34.822 "mibps": 2329.1398030926243, 00:08:34.822 "io_failed": 0, 00:08:34.822 "io_timeout": 0, 00:08:34.822 "avg_latency_us": 50.69859282295633, 00:08:34.822 "min_latency_us": 23.36419213973799, 00:08:34.822 "max_latency_us": 1624.0908296943232 00:08:34.822 } 00:08:34.822 ], 00:08:34.822 "core_count": 1 00:08:34.822 } 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63968 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63968 ']' 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63968 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63968 00:08:34.822 killing process with pid 63968 00:08:34.822 09:27:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63968' 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63968 00:08:34.822 [2024-11-15 09:27:23.080986] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.822 09:27:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63968 00:08:34.822 [2024-11-15 09:27:23.237234] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.omVmRC1FUa 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:36.201 ************************************ 00:08:36.201 END TEST raid_write_error_test 00:08:36.201 ************************************ 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:36.201 00:08:36.201 real 0m4.578s 00:08:36.201 user 0m5.499s 00:08:36.201 sys 0m0.588s 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:36.201 09:27:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.201 09:27:24 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:36.201 09:27:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:36.201 09:27:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:36.201 09:27:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:36.201 09:27:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:36.201 09:27:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:36.201 ************************************ 00:08:36.201 START TEST raid_state_function_test 00:08:36.201 ************************************ 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64112 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:36.201 09:27:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64112' 00:08:36.201 Process raid pid: 64112 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64112 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 64112 ']' 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:36.201 09:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.460 [2024-11-15 09:27:24.691718] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
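`waitforlisten 64112` blocks until the freshly started `bdev_svc` process answers on `/var/tmp/spdk.sock`. A rough Python equivalent of that wait loop — the polling interval and timeout are our assumptions, not values from the SPDK helper:

```python
import os
import socket
import time

def wait_for_rpc_socket(path, timeout_s=5.0, interval_s=0.1):
    # Poll until a UNIX domain socket at `path` accepts a connection.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True          # the app is up and listening
            except OSError:
                pass                 # socket exists but nobody accepts yet
            finally:
                s.close()
        time.sleep(interval_s)
    return False
```

With no SPDK app running, the call simply times out and returns False.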
00:08:36.460 [2024-11-15 09:27:24.691922] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.460 [2024-11-15 09:27:24.868644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.774 [2024-11-15 09:27:24.993923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.774 [2024-11-15 09:27:25.213737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.774 [2024-11-15 09:27:25.213786] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.355 [2024-11-15 09:27:25.630911] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.355 [2024-11-15 09:27:25.630973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.355 [2024-11-15 09:27:25.630985] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.355 [2024-11-15 09:27:25.630995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.355 [2024-11-15 09:27:25.631002] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:08:37.355 [2024-11-15 09:27:25.631012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.355 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.356 09:27:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.356 "name": "Existed_Raid", 00:08:37.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.356 "strip_size_kb": 64, 00:08:37.356 "state": "configuring", 00:08:37.356 "raid_level": "raid0", 00:08:37.356 "superblock": false, 00:08:37.356 "num_base_bdevs": 3, 00:08:37.356 "num_base_bdevs_discovered": 0, 00:08:37.356 "num_base_bdevs_operational": 3, 00:08:37.356 "base_bdevs_list": [ 00:08:37.356 { 00:08:37.356 "name": "BaseBdev1", 00:08:37.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.356 "is_configured": false, 00:08:37.356 "data_offset": 0, 00:08:37.356 "data_size": 0 00:08:37.356 }, 00:08:37.356 { 00:08:37.356 "name": "BaseBdev2", 00:08:37.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.356 "is_configured": false, 00:08:37.356 "data_offset": 0, 00:08:37.356 "data_size": 0 00:08:37.356 }, 00:08:37.356 { 00:08:37.356 "name": "BaseBdev3", 00:08:37.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.356 "is_configured": false, 00:08:37.356 "data_offset": 0, 00:08:37.356 "data_size": 0 00:08:37.356 } 00:08:37.356 ] 00:08:37.356 }' 00:08:37.356 09:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.356 09:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.614 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 [2024-11-15 09:27:26.082066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.874 [2024-11-15 09:27:26.082200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
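The `rpc_cmd` invocations in this test (`bdev_raid_create`, `bdev_raid_get_bdevs`, `bdev_raid_delete`, `bdev_malloc_create`) all travel as JSON-RPC 2.0 requests over the app's UNIX socket. The `bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid` call seen above would be framed roughly as below; the parameter names follow SPDK's `rpc.py`, while the framing helper itself is a sketch:

```python
import json

def build_rpc_request(method, params, req_id=1):
    # Frame a JSON-RPC 2.0 request the way rpc.py does before writing
    # it to /var/tmp/spdk.sock.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "id": req_id,
        "params": params,
    })

req = build_rpc_request("bdev_raid_create", {
    "name": "Existed_Raid",
    "raid_level": "raid0",
    "strip_size_kb": 64,                                 # the -z 64 argument
    "base_bdevs": ["BaseBdev1", "BaseBdev2", "BaseBdev3"],
})
```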
00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 [2024-11-15 09:27:26.090032] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.874 [2024-11-15 09:27:26.090142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.874 [2024-11-15 09:27:26.090174] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.874 [2024-11-15 09:27:26.090201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.874 [2024-11-15 09:27:26.090222] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.874 [2024-11-15 09:27:26.090247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 [2024-11-15 09:27:26.135827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.874 BaseBdev1 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.874 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.874 [ 00:08:37.874 { 00:08:37.874 "name": "BaseBdev1", 00:08:37.874 "aliases": [ 00:08:37.874 "9fb8ebaa-7b9f-473e-a6db-c893fad2852b" 00:08:37.874 ], 00:08:37.874 "product_name": "Malloc disk", 00:08:37.874 "block_size": 512, 00:08:37.875 "num_blocks": 65536, 00:08:37.875 "uuid": "9fb8ebaa-7b9f-473e-a6db-c893fad2852b", 00:08:37.875 "assigned_rate_limits": { 00:08:37.875 "rw_ios_per_sec": 0, 00:08:37.875 "rw_mbytes_per_sec": 0, 00:08:37.875 "r_mbytes_per_sec": 0, 00:08:37.875 "w_mbytes_per_sec": 0 00:08:37.875 }, 
00:08:37.875 "claimed": true, 00:08:37.875 "claim_type": "exclusive_write", 00:08:37.875 "zoned": false, 00:08:37.875 "supported_io_types": { 00:08:37.875 "read": true, 00:08:37.875 "write": true, 00:08:37.875 "unmap": true, 00:08:37.875 "flush": true, 00:08:37.875 "reset": true, 00:08:37.875 "nvme_admin": false, 00:08:37.875 "nvme_io": false, 00:08:37.875 "nvme_io_md": false, 00:08:37.875 "write_zeroes": true, 00:08:37.875 "zcopy": true, 00:08:37.875 "get_zone_info": false, 00:08:37.875 "zone_management": false, 00:08:37.875 "zone_append": false, 00:08:37.875 "compare": false, 00:08:37.875 "compare_and_write": false, 00:08:37.875 "abort": true, 00:08:37.875 "seek_hole": false, 00:08:37.875 "seek_data": false, 00:08:37.875 "copy": true, 00:08:37.875 "nvme_iov_md": false 00:08:37.875 }, 00:08:37.875 "memory_domains": [ 00:08:37.875 { 00:08:37.875 "dma_device_id": "system", 00:08:37.875 "dma_device_type": 1 00:08:37.875 }, 00:08:37.875 { 00:08:37.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.875 "dma_device_type": 2 00:08:37.875 } 00:08:37.875 ], 00:08:37.875 "driver_specific": {} 00:08:37.875 } 00:08:37.875 ] 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.875 09:27:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.875 "name": "Existed_Raid", 00:08:37.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.875 "strip_size_kb": 64, 00:08:37.875 "state": "configuring", 00:08:37.875 "raid_level": "raid0", 00:08:37.875 "superblock": false, 00:08:37.875 "num_base_bdevs": 3, 00:08:37.875 "num_base_bdevs_discovered": 1, 00:08:37.875 "num_base_bdevs_operational": 3, 00:08:37.875 "base_bdevs_list": [ 00:08:37.875 { 00:08:37.875 "name": "BaseBdev1", 00:08:37.875 "uuid": "9fb8ebaa-7b9f-473e-a6db-c893fad2852b", 00:08:37.875 "is_configured": true, 00:08:37.875 "data_offset": 0, 00:08:37.875 "data_size": 65536 00:08:37.875 }, 00:08:37.875 { 00:08:37.875 "name": "BaseBdev2", 00:08:37.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.875 "is_configured": false, 00:08:37.875 
"data_offset": 0, 00:08:37.875 "data_size": 0 00:08:37.875 }, 00:08:37.875 { 00:08:37.875 "name": "BaseBdev3", 00:08:37.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.875 "is_configured": false, 00:08:37.875 "data_offset": 0, 00:08:37.875 "data_size": 0 00:08:37.875 } 00:08:37.875 ] 00:08:37.875 }' 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.875 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.444 [2024-11-15 09:27:26.670972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.444 [2024-11-15 09:27:26.671151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.444 [2024-11-15 09:27:26.678987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.444 [2024-11-15 09:27:26.680954] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.444 [2024-11-15 09:27:26.680997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:38.444 [2024-11-15 09:27:26.681009] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:38.444 [2024-11-15 09:27:26.681019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
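The `waitforbdev BaseBdev1` step traced earlier first waits for examine to finish (`bdev_wait_for_examine`) and then polls `bdev_get_bdevs -b BaseBdev1 -t 2000` until the bdev appears. The retry loop can be sketched like this — the `query` callable stands in for the RPC round-trip and is our assumption:

```python
import time

def wait_for_bdev(query, name, timeout_ms=2000, interval_ms=100):
    # Poll query(name) until it reports the bdev or the timeout expires,
    # mirroring the -t 2000 timeout passed to bdev_get_bdevs.
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if query(name):              # e.g. rpc_cmd bdev_get_bdevs -b <name>
            return True
        time.sleep(interval_ms / 1000.0)
    return False

# Fake RPC backend for illustration: BaseBdev1 shows up on the third poll.
calls = {"n": 0}
def fake_query(name):
    calls["n"] += 1
    return name == "BaseBdev1" and calls["n"] >= 3
```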
00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.444 "name": "Existed_Raid", 00:08:38.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.444 "strip_size_kb": 64, 00:08:38.444 "state": "configuring", 00:08:38.444 "raid_level": "raid0", 00:08:38.444 "superblock": false, 00:08:38.444 "num_base_bdevs": 3, 00:08:38.444 "num_base_bdevs_discovered": 1, 00:08:38.444 "num_base_bdevs_operational": 3, 00:08:38.444 "base_bdevs_list": [ 00:08:38.444 { 00:08:38.444 "name": "BaseBdev1", 00:08:38.444 "uuid": "9fb8ebaa-7b9f-473e-a6db-c893fad2852b", 00:08:38.444 "is_configured": true, 00:08:38.444 "data_offset": 0, 00:08:38.444 "data_size": 65536 00:08:38.444 }, 00:08:38.444 { 00:08:38.444 "name": "BaseBdev2", 00:08:38.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.444 "is_configured": false, 00:08:38.444 "data_offset": 0, 00:08:38.444 "data_size": 0 00:08:38.444 }, 00:08:38.444 { 00:08:38.444 "name": "BaseBdev3", 00:08:38.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.444 "is_configured": false, 00:08:38.444 "data_offset": 0, 00:08:38.444 "data_size": 0 00:08:38.444 } 00:08:38.444 ] 00:08:38.444 }' 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.444 09:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.703 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.703 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
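The test prologue assembles its create arguments from the raid level: any level other than raid1 gets a `-z 64` strip-size argument (`'[' raid0 '!=' raid1 ']'` above), the base bdev names are generated as `BaseBdev1..N`, and the superblock flag is empty when `superblock=false`. Condensed into Python — the `-s` superblock flag for the true branch is our assumption, since this run only exercises the false branch:

```python
def build_create_args(raid_level, num_base_bdevs, superblock):
    # Reproduce the argument assembly from the raid_state_function_test prologue.
    base_bdevs = [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]
    strip_arg = "" if raid_level == "raid1" else "-z 64"   # raid1 takes no strip size
    sb_arg = "-s" if superblock else ""                    # assumed flag for superblock
    return base_bdevs, strip_arg, sb_arg
```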
00:08:38.703 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.962 [2024-11-15 09:27:27.178663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.962 BaseBdev2 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.962 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.962 [ 00:08:38.962 { 00:08:38.962 "name": "BaseBdev2", 00:08:38.962 "aliases": [ 00:08:38.962 "a78e58aa-9cab-4f4e-84b6-a5bf5808f318" 00:08:38.962 ], 00:08:38.962 
"product_name": "Malloc disk", 00:08:38.962 "block_size": 512, 00:08:38.962 "num_blocks": 65536, 00:08:38.962 "uuid": "a78e58aa-9cab-4f4e-84b6-a5bf5808f318", 00:08:38.962 "assigned_rate_limits": { 00:08:38.962 "rw_ios_per_sec": 0, 00:08:38.962 "rw_mbytes_per_sec": 0, 00:08:38.962 "r_mbytes_per_sec": 0, 00:08:38.962 "w_mbytes_per_sec": 0 00:08:38.962 }, 00:08:38.962 "claimed": true, 00:08:38.962 "claim_type": "exclusive_write", 00:08:38.962 "zoned": false, 00:08:38.962 "supported_io_types": { 00:08:38.962 "read": true, 00:08:38.962 "write": true, 00:08:38.962 "unmap": true, 00:08:38.962 "flush": true, 00:08:38.962 "reset": true, 00:08:38.962 "nvme_admin": false, 00:08:38.962 "nvme_io": false, 00:08:38.962 "nvme_io_md": false, 00:08:38.962 "write_zeroes": true, 00:08:38.962 "zcopy": true, 00:08:38.963 "get_zone_info": false, 00:08:38.963 "zone_management": false, 00:08:38.963 "zone_append": false, 00:08:38.963 "compare": false, 00:08:38.963 "compare_and_write": false, 00:08:38.963 "abort": true, 00:08:38.963 "seek_hole": false, 00:08:38.963 "seek_data": false, 00:08:38.963 "copy": true, 00:08:38.963 "nvme_iov_md": false 00:08:38.963 }, 00:08:38.963 "memory_domains": [ 00:08:38.963 { 00:08:38.963 "dma_device_id": "system", 00:08:38.963 "dma_device_type": 1 00:08:38.963 }, 00:08:38.963 { 00:08:38.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.963 "dma_device_type": 2 00:08:38.963 } 00:08:38.963 ], 00:08:38.963 "driver_specific": {} 00:08:38.963 } 00:08:38.963 ] 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.963 "name": "Existed_Raid", 00:08:38.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.963 "strip_size_kb": 64, 00:08:38.963 "state": "configuring", 00:08:38.963 "raid_level": "raid0", 00:08:38.963 "superblock": false, 00:08:38.963 
"num_base_bdevs": 3, 00:08:38.963 "num_base_bdevs_discovered": 2, 00:08:38.963 "num_base_bdevs_operational": 3, 00:08:38.963 "base_bdevs_list": [ 00:08:38.963 { 00:08:38.963 "name": "BaseBdev1", 00:08:38.963 "uuid": "9fb8ebaa-7b9f-473e-a6db-c893fad2852b", 00:08:38.963 "is_configured": true, 00:08:38.963 "data_offset": 0, 00:08:38.963 "data_size": 65536 00:08:38.963 }, 00:08:38.963 { 00:08:38.963 "name": "BaseBdev2", 00:08:38.963 "uuid": "a78e58aa-9cab-4f4e-84b6-a5bf5808f318", 00:08:38.963 "is_configured": true, 00:08:38.963 "data_offset": 0, 00:08:38.963 "data_size": 65536 00:08:38.963 }, 00:08:38.963 { 00:08:38.963 "name": "BaseBdev3", 00:08:38.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.963 "is_configured": false, 00:08:38.963 "data_offset": 0, 00:08:38.963 "data_size": 0 00:08:38.963 } 00:08:38.963 ] 00:08:38.963 }' 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.963 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.222 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:39.222 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.222 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.483 [2024-11-15 09:27:27.709714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.483 [2024-11-15 09:27:27.709915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:39.483 [2024-11-15 09:27:27.709959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:39.483 [2024-11-15 09:27:27.710274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:39.483 [2024-11-15 09:27:27.710453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:08:39.483 [2024-11-15 09:27:27.710462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:39.483 [2024-11-15 09:27:27.710757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.483 BaseBdev3 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.483 [ 00:08:39.483 { 00:08:39.483 "name": "BaseBdev3", 00:08:39.483 "aliases": [ 00:08:39.483 
"c30aebe8-919e-4a4b-b2bf-901bf7cc6e6d" 00:08:39.483 ], 00:08:39.483 "product_name": "Malloc disk", 00:08:39.483 "block_size": 512, 00:08:39.483 "num_blocks": 65536, 00:08:39.483 "uuid": "c30aebe8-919e-4a4b-b2bf-901bf7cc6e6d", 00:08:39.483 "assigned_rate_limits": { 00:08:39.483 "rw_ios_per_sec": 0, 00:08:39.483 "rw_mbytes_per_sec": 0, 00:08:39.483 "r_mbytes_per_sec": 0, 00:08:39.483 "w_mbytes_per_sec": 0 00:08:39.483 }, 00:08:39.483 "claimed": true, 00:08:39.483 "claim_type": "exclusive_write", 00:08:39.483 "zoned": false, 00:08:39.483 "supported_io_types": { 00:08:39.483 "read": true, 00:08:39.483 "write": true, 00:08:39.483 "unmap": true, 00:08:39.483 "flush": true, 00:08:39.483 "reset": true, 00:08:39.483 "nvme_admin": false, 00:08:39.483 "nvme_io": false, 00:08:39.483 "nvme_io_md": false, 00:08:39.483 "write_zeroes": true, 00:08:39.483 "zcopy": true, 00:08:39.483 "get_zone_info": false, 00:08:39.483 "zone_management": false, 00:08:39.483 "zone_append": false, 00:08:39.483 "compare": false, 00:08:39.483 "compare_and_write": false, 00:08:39.483 "abort": true, 00:08:39.483 "seek_hole": false, 00:08:39.483 "seek_data": false, 00:08:39.483 "copy": true, 00:08:39.483 "nvme_iov_md": false 00:08:39.483 }, 00:08:39.483 "memory_domains": [ 00:08:39.483 { 00:08:39.483 "dma_device_id": "system", 00:08:39.483 "dma_device_type": 1 00:08:39.483 }, 00:08:39.483 { 00:08:39.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.483 "dma_device_type": 2 00:08:39.483 } 00:08:39.483 ], 00:08:39.483 "driver_specific": {} 00:08:39.483 } 00:08:39.483 ] 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.483 
09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.483 "name": "Existed_Raid", 00:08:39.483 "uuid": "4956b1f5-0a08-4470-9e3c-6193ffd22ec6", 00:08:39.483 "strip_size_kb": 64, 00:08:39.483 "state": "online", 00:08:39.483 
"raid_level": "raid0", 00:08:39.483 "superblock": false, 00:08:39.483 "num_base_bdevs": 3, 00:08:39.483 "num_base_bdevs_discovered": 3, 00:08:39.483 "num_base_bdevs_operational": 3, 00:08:39.483 "base_bdevs_list": [ 00:08:39.483 { 00:08:39.483 "name": "BaseBdev1", 00:08:39.483 "uuid": "9fb8ebaa-7b9f-473e-a6db-c893fad2852b", 00:08:39.483 "is_configured": true, 00:08:39.483 "data_offset": 0, 00:08:39.483 "data_size": 65536 00:08:39.483 }, 00:08:39.483 { 00:08:39.483 "name": "BaseBdev2", 00:08:39.483 "uuid": "a78e58aa-9cab-4f4e-84b6-a5bf5808f318", 00:08:39.483 "is_configured": true, 00:08:39.483 "data_offset": 0, 00:08:39.483 "data_size": 65536 00:08:39.483 }, 00:08:39.483 { 00:08:39.483 "name": "BaseBdev3", 00:08:39.483 "uuid": "c30aebe8-919e-4a4b-b2bf-901bf7cc6e6d", 00:08:39.483 "is_configured": true, 00:08:39.483 "data_offset": 0, 00:08:39.483 "data_size": 65536 00:08:39.483 } 00:08:39.483 ] 00:08:39.483 }' 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.483 09:27:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.742 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.742 [2024-11-15 09:27:28.197366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.001 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.001 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.001 "name": "Existed_Raid", 00:08:40.001 "aliases": [ 00:08:40.001 "4956b1f5-0a08-4470-9e3c-6193ffd22ec6" 00:08:40.001 ], 00:08:40.001 "product_name": "Raid Volume", 00:08:40.001 "block_size": 512, 00:08:40.001 "num_blocks": 196608, 00:08:40.001 "uuid": "4956b1f5-0a08-4470-9e3c-6193ffd22ec6", 00:08:40.001 "assigned_rate_limits": { 00:08:40.001 "rw_ios_per_sec": 0, 00:08:40.001 "rw_mbytes_per_sec": 0, 00:08:40.001 "r_mbytes_per_sec": 0, 00:08:40.001 "w_mbytes_per_sec": 0 00:08:40.001 }, 00:08:40.001 "claimed": false, 00:08:40.001 "zoned": false, 00:08:40.001 "supported_io_types": { 00:08:40.001 "read": true, 00:08:40.001 "write": true, 00:08:40.001 "unmap": true, 00:08:40.001 "flush": true, 00:08:40.001 "reset": true, 00:08:40.001 "nvme_admin": false, 00:08:40.001 "nvme_io": false, 00:08:40.001 "nvme_io_md": false, 00:08:40.001 "write_zeroes": true, 00:08:40.001 "zcopy": false, 00:08:40.001 "get_zone_info": false, 00:08:40.001 "zone_management": false, 00:08:40.001 "zone_append": false, 00:08:40.001 "compare": false, 00:08:40.001 "compare_and_write": false, 00:08:40.001 "abort": false, 00:08:40.001 "seek_hole": false, 00:08:40.001 "seek_data": false, 00:08:40.001 "copy": false, 00:08:40.001 "nvme_iov_md": false 00:08:40.001 }, 00:08:40.001 "memory_domains": [ 00:08:40.001 { 00:08:40.001 "dma_device_id": "system", 00:08:40.001 "dma_device_type": 1 00:08:40.001 }, 00:08:40.001 { 
00:08:40.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.001 "dma_device_type": 2 00:08:40.001 }, 00:08:40.001 { 00:08:40.001 "dma_device_id": "system", 00:08:40.001 "dma_device_type": 1 00:08:40.001 }, 00:08:40.001 { 00:08:40.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.001 "dma_device_type": 2 00:08:40.001 }, 00:08:40.001 { 00:08:40.001 "dma_device_id": "system", 00:08:40.001 "dma_device_type": 1 00:08:40.001 }, 00:08:40.001 { 00:08:40.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.001 "dma_device_type": 2 00:08:40.001 } 00:08:40.001 ], 00:08:40.001 "driver_specific": { 00:08:40.001 "raid": { 00:08:40.001 "uuid": "4956b1f5-0a08-4470-9e3c-6193ffd22ec6", 00:08:40.001 "strip_size_kb": 64, 00:08:40.001 "state": "online", 00:08:40.001 "raid_level": "raid0", 00:08:40.001 "superblock": false, 00:08:40.001 "num_base_bdevs": 3, 00:08:40.001 "num_base_bdevs_discovered": 3, 00:08:40.001 "num_base_bdevs_operational": 3, 00:08:40.001 "base_bdevs_list": [ 00:08:40.001 { 00:08:40.001 "name": "BaseBdev1", 00:08:40.001 "uuid": "9fb8ebaa-7b9f-473e-a6db-c893fad2852b", 00:08:40.001 "is_configured": true, 00:08:40.001 "data_offset": 0, 00:08:40.001 "data_size": 65536 00:08:40.001 }, 00:08:40.001 { 00:08:40.001 "name": "BaseBdev2", 00:08:40.001 "uuid": "a78e58aa-9cab-4f4e-84b6-a5bf5808f318", 00:08:40.001 "is_configured": true, 00:08:40.001 "data_offset": 0, 00:08:40.001 "data_size": 65536 00:08:40.001 }, 00:08:40.001 { 00:08:40.001 "name": "BaseBdev3", 00:08:40.002 "uuid": "c30aebe8-919e-4a4b-b2bf-901bf7cc6e6d", 00:08:40.002 "is_configured": true, 00:08:40.002 "data_offset": 0, 00:08:40.002 "data_size": 65536 00:08:40.002 } 00:08:40.002 ] 00:08:40.002 } 00:08:40.002 } 00:08:40.002 }' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:08:40.002 BaseBdev2 00:08:40.002 BaseBdev3' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.002 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.261 [2024-11-15 09:27:28.512573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:40.261 [2024-11-15 09:27:28.512611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.261 [2024-11-15 09:27:28.512674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.261 09:27:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.261 "name": "Existed_Raid", 00:08:40.261 "uuid": "4956b1f5-0a08-4470-9e3c-6193ffd22ec6", 00:08:40.261 "strip_size_kb": 64, 00:08:40.261 "state": "offline", 00:08:40.261 "raid_level": "raid0", 00:08:40.261 "superblock": false, 00:08:40.261 "num_base_bdevs": 3, 00:08:40.261 "num_base_bdevs_discovered": 2, 00:08:40.261 "num_base_bdevs_operational": 2, 00:08:40.261 "base_bdevs_list": [ 00:08:40.261 { 00:08:40.261 "name": null, 00:08:40.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.261 "is_configured": false, 00:08:40.261 "data_offset": 0, 00:08:40.261 "data_size": 65536 00:08:40.261 }, 00:08:40.261 { 00:08:40.261 "name": "BaseBdev2", 00:08:40.261 "uuid": "a78e58aa-9cab-4f4e-84b6-a5bf5808f318", 00:08:40.261 "is_configured": true, 00:08:40.261 "data_offset": 0, 00:08:40.261 "data_size": 65536 00:08:40.261 }, 00:08:40.261 { 00:08:40.261 "name": "BaseBdev3", 00:08:40.261 "uuid": "c30aebe8-919e-4a4b-b2bf-901bf7cc6e6d", 00:08:40.261 "is_configured": true, 00:08:40.261 "data_offset": 0, 00:08:40.261 "data_size": 65536 00:08:40.261 } 00:08:40.261 ] 00:08:40.261 }' 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.261 09:27:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.844 09:27:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.844 [2024-11-15 09:27:29.104084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.844 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.845 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.845 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.845 09:27:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.845 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.845 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:40.845 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.845 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.845 [2024-11-15 09:27:29.263264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.845 [2024-11-15 09:27:29.263430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:41.104 
09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.104 BaseBdev2 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:41.104 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.105 [ 00:08:41.105 { 00:08:41.105 "name": "BaseBdev2", 00:08:41.105 "aliases": [ 00:08:41.105 "8036f9f0-80ab-4b19-8a9a-5f4a94776543" 00:08:41.105 ], 00:08:41.105 "product_name": "Malloc disk", 00:08:41.105 "block_size": 512, 00:08:41.105 "num_blocks": 65536, 00:08:41.105 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:41.105 "assigned_rate_limits": { 00:08:41.105 "rw_ios_per_sec": 0, 00:08:41.105 "rw_mbytes_per_sec": 0, 00:08:41.105 "r_mbytes_per_sec": 0, 00:08:41.105 "w_mbytes_per_sec": 0 00:08:41.105 }, 00:08:41.105 "claimed": false, 00:08:41.105 "zoned": false, 00:08:41.105 "supported_io_types": { 00:08:41.105 "read": true, 00:08:41.105 "write": true, 00:08:41.105 "unmap": true, 00:08:41.105 "flush": true, 00:08:41.105 "reset": true, 00:08:41.105 "nvme_admin": false, 00:08:41.105 "nvme_io": false, 00:08:41.105 "nvme_io_md": false, 00:08:41.105 "write_zeroes": true, 00:08:41.105 "zcopy": true, 00:08:41.105 "get_zone_info": false, 00:08:41.105 "zone_management": false, 00:08:41.105 "zone_append": false, 00:08:41.105 "compare": false, 00:08:41.105 "compare_and_write": false, 00:08:41.105 "abort": true, 00:08:41.105 "seek_hole": false, 00:08:41.105 "seek_data": false, 00:08:41.105 "copy": true, 00:08:41.105 "nvme_iov_md": false 00:08:41.105 }, 00:08:41.105 "memory_domains": [ 00:08:41.105 { 00:08:41.105 "dma_device_id": "system", 00:08:41.105 "dma_device_type": 1 00:08:41.105 }, 00:08:41.105 { 00:08:41.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.105 "dma_device_type": 2 00:08:41.105 } 00:08:41.105 ], 00:08:41.105 "driver_specific": {} 00:08:41.105 } 00:08:41.105 ] 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:41.105 
09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.105 BaseBdev3 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.105 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.365 [ 00:08:41.365 { 00:08:41.365 "name": "BaseBdev3", 00:08:41.365 "aliases": [ 00:08:41.365 "6f552363-6779-44ab-be35-89b667534d42" 00:08:41.365 ], 00:08:41.365 "product_name": "Malloc disk", 00:08:41.365 "block_size": 512, 00:08:41.365 "num_blocks": 65536, 00:08:41.365 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:41.365 "assigned_rate_limits": { 00:08:41.365 "rw_ios_per_sec": 0, 00:08:41.365 "rw_mbytes_per_sec": 0, 00:08:41.365 "r_mbytes_per_sec": 0, 00:08:41.365 "w_mbytes_per_sec": 0 00:08:41.365 }, 00:08:41.365 "claimed": false, 00:08:41.365 "zoned": false, 00:08:41.365 "supported_io_types": { 00:08:41.365 "read": true, 00:08:41.365 "write": true, 00:08:41.365 "unmap": true, 00:08:41.365 "flush": true, 00:08:41.365 "reset": true, 00:08:41.365 "nvme_admin": false, 00:08:41.365 "nvme_io": false, 00:08:41.365 "nvme_io_md": false, 00:08:41.365 "write_zeroes": true, 00:08:41.365 "zcopy": true, 00:08:41.365 "get_zone_info": false, 00:08:41.365 "zone_management": false, 00:08:41.365 "zone_append": false, 00:08:41.365 "compare": false, 00:08:41.365 "compare_and_write": false, 00:08:41.365 "abort": true, 00:08:41.365 "seek_hole": false, 00:08:41.365 "seek_data": false, 00:08:41.365 "copy": true, 00:08:41.365 "nvme_iov_md": false 00:08:41.365 }, 00:08:41.365 "memory_domains": [ 00:08:41.365 { 00:08:41.365 "dma_device_id": "system", 00:08:41.365 "dma_device_type": 1 00:08:41.365 }, 00:08:41.365 { 00:08:41.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.365 "dma_device_type": 2 00:08:41.365 } 00:08:41.365 ], 00:08:41.365 "driver_specific": {} 00:08:41.365 } 00:08:41.365 ] 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:41.365 
09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.365 [2024-11-15 09:27:29.590251] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.365 [2024-11-15 09:27:29.590405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.365 [2024-11-15 09:27:29.590468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.365 [2024-11-15 09:27:29.592417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.365 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.366 "name": "Existed_Raid", 00:08:41.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.366 "strip_size_kb": 64, 00:08:41.366 "state": "configuring", 00:08:41.366 "raid_level": "raid0", 00:08:41.366 "superblock": false, 00:08:41.366 "num_base_bdevs": 3, 00:08:41.366 "num_base_bdevs_discovered": 2, 00:08:41.366 "num_base_bdevs_operational": 3, 00:08:41.366 "base_bdevs_list": [ 00:08:41.366 { 00:08:41.366 "name": "BaseBdev1", 00:08:41.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.366 "is_configured": false, 00:08:41.366 "data_offset": 0, 00:08:41.366 "data_size": 0 00:08:41.366 }, 00:08:41.366 { 00:08:41.366 "name": "BaseBdev2", 00:08:41.366 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:41.366 "is_configured": true, 00:08:41.366 "data_offset": 0, 00:08:41.366 "data_size": 65536 00:08:41.366 }, 00:08:41.366 { 00:08:41.366 "name": "BaseBdev3", 00:08:41.366 "uuid": 
"6f552363-6779-44ab-be35-89b667534d42", 00:08:41.366 "is_configured": true, 00:08:41.366 "data_offset": 0, 00:08:41.366 "data_size": 65536 00:08:41.366 } 00:08:41.366 ] 00:08:41.366 }' 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.366 09:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.625 [2024-11-15 09:27:30.069522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.625 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.884 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.884 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.884 "name": "Existed_Raid", 00:08:41.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.884 "strip_size_kb": 64, 00:08:41.884 "state": "configuring", 00:08:41.884 "raid_level": "raid0", 00:08:41.884 "superblock": false, 00:08:41.884 "num_base_bdevs": 3, 00:08:41.884 "num_base_bdevs_discovered": 1, 00:08:41.884 "num_base_bdevs_operational": 3, 00:08:41.884 "base_bdevs_list": [ 00:08:41.884 { 00:08:41.884 "name": "BaseBdev1", 00:08:41.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.884 "is_configured": false, 00:08:41.884 "data_offset": 0, 00:08:41.884 "data_size": 0 00:08:41.884 }, 00:08:41.884 { 00:08:41.884 "name": null, 00:08:41.884 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:41.884 "is_configured": false, 00:08:41.884 "data_offset": 0, 00:08:41.884 "data_size": 65536 00:08:41.884 }, 00:08:41.884 { 00:08:41.884 "name": "BaseBdev3", 00:08:41.884 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:41.884 "is_configured": true, 00:08:41.884 "data_offset": 0, 00:08:41.884 "data_size": 65536 00:08:41.884 } 00:08:41.884 ] 00:08:41.884 }' 00:08:41.884 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:41.884 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 [2024-11-15 09:27:30.554680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.144 BaseBdev1 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 [ 00:08:42.144 { 00:08:42.144 "name": "BaseBdev1", 00:08:42.144 "aliases": [ 00:08:42.144 "4447ab3b-bc7a-4727-b2ea-2c275bfce80d" 00:08:42.144 ], 00:08:42.144 "product_name": "Malloc disk", 00:08:42.144 "block_size": 512, 00:08:42.144 "num_blocks": 65536, 00:08:42.144 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:42.144 "assigned_rate_limits": { 00:08:42.144 "rw_ios_per_sec": 0, 00:08:42.144 "rw_mbytes_per_sec": 0, 00:08:42.144 "r_mbytes_per_sec": 0, 00:08:42.144 "w_mbytes_per_sec": 0 00:08:42.144 }, 00:08:42.144 "claimed": true, 00:08:42.144 "claim_type": "exclusive_write", 00:08:42.144 "zoned": false, 00:08:42.144 "supported_io_types": { 00:08:42.144 "read": true, 00:08:42.144 "write": true, 00:08:42.144 "unmap": true, 00:08:42.144 "flush": true, 00:08:42.144 "reset": true, 00:08:42.144 "nvme_admin": false, 00:08:42.144 "nvme_io": false, 00:08:42.144 "nvme_io_md": false, 00:08:42.144 "write_zeroes": true, 00:08:42.144 "zcopy": true, 00:08:42.144 "get_zone_info": false, 00:08:42.144 "zone_management": false, 00:08:42.144 "zone_append": false, 00:08:42.144 "compare": false, 00:08:42.144 "compare_and_write": false, 
00:08:42.144 "abort": true, 00:08:42.144 "seek_hole": false, 00:08:42.144 "seek_data": false, 00:08:42.144 "copy": true, 00:08:42.144 "nvme_iov_md": false 00:08:42.144 }, 00:08:42.144 "memory_domains": [ 00:08:42.144 { 00:08:42.144 "dma_device_id": "system", 00:08:42.144 "dma_device_type": 1 00:08:42.144 }, 00:08:42.144 { 00:08:42.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.144 "dma_device_type": 2 00:08:42.144 } 00:08:42.144 ], 00:08:42.144 "driver_specific": {} 00:08:42.144 } 00:08:42.144 ] 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.144 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.403 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.403 "name": "Existed_Raid", 00:08:42.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.403 "strip_size_kb": 64, 00:08:42.403 "state": "configuring", 00:08:42.403 "raid_level": "raid0", 00:08:42.403 "superblock": false, 00:08:42.403 "num_base_bdevs": 3, 00:08:42.403 "num_base_bdevs_discovered": 2, 00:08:42.403 "num_base_bdevs_operational": 3, 00:08:42.403 "base_bdevs_list": [ 00:08:42.403 { 00:08:42.403 "name": "BaseBdev1", 00:08:42.403 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:42.403 "is_configured": true, 00:08:42.403 "data_offset": 0, 00:08:42.403 "data_size": 65536 00:08:42.403 }, 00:08:42.403 { 00:08:42.403 "name": null, 00:08:42.403 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:42.403 "is_configured": false, 00:08:42.403 "data_offset": 0, 00:08:42.403 "data_size": 65536 00:08:42.403 }, 00:08:42.403 { 00:08:42.403 "name": "BaseBdev3", 00:08:42.403 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:42.403 "is_configured": true, 00:08:42.403 "data_offset": 0, 00:08:42.403 "data_size": 65536 00:08:42.403 } 00:08:42.403 ] 00:08:42.403 }' 00:08:42.403 09:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.403 09:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.663 09:27:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.663 [2024-11-15 09:27:31.121839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.663 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.922 "name": "Existed_Raid", 00:08:42.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.922 "strip_size_kb": 64, 00:08:42.922 "state": "configuring", 00:08:42.922 "raid_level": "raid0", 00:08:42.922 "superblock": false, 00:08:42.922 "num_base_bdevs": 3, 00:08:42.922 "num_base_bdevs_discovered": 1, 00:08:42.922 "num_base_bdevs_operational": 3, 00:08:42.922 "base_bdevs_list": [ 00:08:42.922 { 00:08:42.922 "name": "BaseBdev1", 00:08:42.922 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:42.922 "is_configured": true, 00:08:42.922 "data_offset": 0, 00:08:42.922 "data_size": 65536 00:08:42.922 }, 00:08:42.922 { 00:08:42.922 "name": null, 00:08:42.922 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:42.922 "is_configured": false, 00:08:42.922 "data_offset": 0, 00:08:42.922 "data_size": 65536 00:08:42.922 }, 00:08:42.922 { 00:08:42.922 "name": null, 00:08:42.922 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:42.922 "is_configured": false, 00:08:42.922 "data_offset": 0, 00:08:42.922 "data_size": 65536 00:08:42.922 } 
00:08:42.922 ] 00:08:42.922 }' 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.922 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.181 [2024-11-15 09:27:31.613098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.181 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.439 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.439 "name": "Existed_Raid", 00:08:43.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.439 "strip_size_kb": 64, 00:08:43.439 "state": "configuring", 00:08:43.439 "raid_level": "raid0", 00:08:43.439 "superblock": false, 00:08:43.439 "num_base_bdevs": 3, 00:08:43.439 "num_base_bdevs_discovered": 2, 00:08:43.439 "num_base_bdevs_operational": 3, 00:08:43.439 "base_bdevs_list": [ 00:08:43.439 { 00:08:43.439 "name": "BaseBdev1", 00:08:43.439 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:43.439 "is_configured": true, 00:08:43.439 "data_offset": 0, 00:08:43.439 "data_size": 65536 00:08:43.439 }, 00:08:43.439 { 00:08:43.439 "name": 
null, 00:08:43.439 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:43.439 "is_configured": false, 00:08:43.439 "data_offset": 0, 00:08:43.439 "data_size": 65536 00:08:43.439 }, 00:08:43.439 { 00:08:43.439 "name": "BaseBdev3", 00:08:43.439 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:43.439 "is_configured": true, 00:08:43.439 "data_offset": 0, 00:08:43.439 "data_size": 65536 00:08:43.439 } 00:08:43.439 ] 00:08:43.439 }' 00:08:43.439 09:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.439 09:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.698 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.698 [2024-11-15 09:27:32.116265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.957 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.957 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.958 "name": "Existed_Raid", 00:08:43.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.958 "strip_size_kb": 64, 00:08:43.958 "state": "configuring", 00:08:43.958 "raid_level": "raid0", 00:08:43.958 "superblock": false, 00:08:43.958 "num_base_bdevs": 3, 00:08:43.958 
"num_base_bdevs_discovered": 1, 00:08:43.958 "num_base_bdevs_operational": 3, 00:08:43.958 "base_bdevs_list": [ 00:08:43.958 { 00:08:43.958 "name": null, 00:08:43.958 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:43.958 "is_configured": false, 00:08:43.958 "data_offset": 0, 00:08:43.958 "data_size": 65536 00:08:43.958 }, 00:08:43.958 { 00:08:43.958 "name": null, 00:08:43.958 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:43.958 "is_configured": false, 00:08:43.958 "data_offset": 0, 00:08:43.958 "data_size": 65536 00:08:43.958 }, 00:08:43.958 { 00:08:43.958 "name": "BaseBdev3", 00:08:43.958 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:43.958 "is_configured": true, 00:08:43.958 "data_offset": 0, 00:08:43.958 "data_size": 65536 00:08:43.958 } 00:08:43.958 ] 00:08:43.958 }' 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.958 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.218 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.218 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.218 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.218 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:44.477 [2024-11-15 09:27:32.732014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.477 "name": "Existed_Raid", 00:08:44.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.477 "strip_size_kb": 64, 00:08:44.477 "state": "configuring", 00:08:44.477 "raid_level": "raid0", 00:08:44.477 "superblock": false, 00:08:44.477 "num_base_bdevs": 3, 00:08:44.477 "num_base_bdevs_discovered": 2, 00:08:44.477 "num_base_bdevs_operational": 3, 00:08:44.477 "base_bdevs_list": [ 00:08:44.477 { 00:08:44.477 "name": null, 00:08:44.477 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:44.477 "is_configured": false, 00:08:44.477 "data_offset": 0, 00:08:44.477 "data_size": 65536 00:08:44.477 }, 00:08:44.477 { 00:08:44.477 "name": "BaseBdev2", 00:08:44.477 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:44.477 "is_configured": true, 00:08:44.477 "data_offset": 0, 00:08:44.477 "data_size": 65536 00:08:44.477 }, 00:08:44.477 { 00:08:44.477 "name": "BaseBdev3", 00:08:44.477 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:44.477 "is_configured": true, 00:08:44.477 "data_offset": 0, 00:08:44.477 "data_size": 65536 00:08:44.477 } 00:08:44.477 ] 00:08:44.477 }' 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.477 09:27:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.736 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.736 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.736 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:44.736 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.736 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.994 
09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4447ab3b-bc7a-4727-b2ea-2c275bfce80d 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.994 [2024-11-15 09:27:33.302779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:44.994 [2024-11-15 09:27:33.302835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:44.994 [2024-11-15 09:27:33.302844] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:44.994 [2024-11-15 09:27:33.303168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:44.994 [2024-11-15 09:27:33.303341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:44.994 [2024-11-15 09:27:33.303360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:44.994 [2024-11-15 09:27:33.303621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.994 NewBaseBdev 00:08:44.994 09:27:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.994 [ 00:08:44.994 { 00:08:44.994 "name": "NewBaseBdev", 00:08:44.994 "aliases": [ 00:08:44.994 "4447ab3b-bc7a-4727-b2ea-2c275bfce80d" 00:08:44.994 ], 00:08:44.994 "product_name": "Malloc disk", 00:08:44.994 "block_size": 512, 00:08:44.994 "num_blocks": 65536, 00:08:44.994 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:44.994 "assigned_rate_limits": { 00:08:44.994 "rw_ios_per_sec": 0, 00:08:44.994 "rw_mbytes_per_sec": 0, 
00:08:44.994 "r_mbytes_per_sec": 0, 00:08:44.994 "w_mbytes_per_sec": 0 00:08:44.994 }, 00:08:44.994 "claimed": true, 00:08:44.994 "claim_type": "exclusive_write", 00:08:44.994 "zoned": false, 00:08:44.994 "supported_io_types": { 00:08:44.994 "read": true, 00:08:44.994 "write": true, 00:08:44.994 "unmap": true, 00:08:44.994 "flush": true, 00:08:44.994 "reset": true, 00:08:44.994 "nvme_admin": false, 00:08:44.994 "nvme_io": false, 00:08:44.994 "nvme_io_md": false, 00:08:44.994 "write_zeroes": true, 00:08:44.994 "zcopy": true, 00:08:44.994 "get_zone_info": false, 00:08:44.994 "zone_management": false, 00:08:44.994 "zone_append": false, 00:08:44.994 "compare": false, 00:08:44.994 "compare_and_write": false, 00:08:44.994 "abort": true, 00:08:44.994 "seek_hole": false, 00:08:44.994 "seek_data": false, 00:08:44.994 "copy": true, 00:08:44.994 "nvme_iov_md": false 00:08:44.994 }, 00:08:44.994 "memory_domains": [ 00:08:44.994 { 00:08:44.994 "dma_device_id": "system", 00:08:44.994 "dma_device_type": 1 00:08:44.994 }, 00:08:44.994 { 00:08:44.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.994 "dma_device_type": 2 00:08:44.994 } 00:08:44.994 ], 00:08:44.994 "driver_specific": {} 00:08:44.994 } 00:08:44.994 ] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.994 "name": "Existed_Raid", 00:08:44.994 "uuid": "62f79ba8-124a-4887-90b8-be466ad3f142", 00:08:44.994 "strip_size_kb": 64, 00:08:44.994 "state": "online", 00:08:44.994 "raid_level": "raid0", 00:08:44.994 "superblock": false, 00:08:44.994 "num_base_bdevs": 3, 00:08:44.994 "num_base_bdevs_discovered": 3, 00:08:44.994 "num_base_bdevs_operational": 3, 00:08:44.994 "base_bdevs_list": [ 00:08:44.994 { 00:08:44.994 "name": "NewBaseBdev", 00:08:44.994 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:44.994 "is_configured": true, 00:08:44.994 "data_offset": 0, 00:08:44.994 "data_size": 65536 00:08:44.994 }, 00:08:44.994 { 00:08:44.994 "name": "BaseBdev2", 00:08:44.994 "uuid": 
"8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:44.994 "is_configured": true, 00:08:44.994 "data_offset": 0, 00:08:44.994 "data_size": 65536 00:08:44.994 }, 00:08:44.994 { 00:08:44.994 "name": "BaseBdev3", 00:08:44.994 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:44.994 "is_configured": true, 00:08:44.994 "data_offset": 0, 00:08:44.994 "data_size": 65536 00:08:44.994 } 00:08:44.994 ] 00:08:44.994 }' 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.994 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.560 [2024-11-15 09:27:33.822299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.560 09:27:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.560 "name": "Existed_Raid", 00:08:45.560 "aliases": [ 00:08:45.560 "62f79ba8-124a-4887-90b8-be466ad3f142" 00:08:45.560 ], 00:08:45.560 "product_name": "Raid Volume", 00:08:45.560 "block_size": 512, 00:08:45.560 "num_blocks": 196608, 00:08:45.560 "uuid": "62f79ba8-124a-4887-90b8-be466ad3f142", 00:08:45.560 "assigned_rate_limits": { 00:08:45.560 "rw_ios_per_sec": 0, 00:08:45.560 "rw_mbytes_per_sec": 0, 00:08:45.560 "r_mbytes_per_sec": 0, 00:08:45.560 "w_mbytes_per_sec": 0 00:08:45.560 }, 00:08:45.560 "claimed": false, 00:08:45.560 "zoned": false, 00:08:45.560 "supported_io_types": { 00:08:45.560 "read": true, 00:08:45.560 "write": true, 00:08:45.560 "unmap": true, 00:08:45.560 "flush": true, 00:08:45.560 "reset": true, 00:08:45.560 "nvme_admin": false, 00:08:45.560 "nvme_io": false, 00:08:45.560 "nvme_io_md": false, 00:08:45.560 "write_zeroes": true, 00:08:45.560 "zcopy": false, 00:08:45.560 "get_zone_info": false, 00:08:45.560 "zone_management": false, 00:08:45.560 "zone_append": false, 00:08:45.560 "compare": false, 00:08:45.560 "compare_and_write": false, 00:08:45.560 "abort": false, 00:08:45.560 "seek_hole": false, 00:08:45.560 "seek_data": false, 00:08:45.560 "copy": false, 00:08:45.560 "nvme_iov_md": false 00:08:45.560 }, 00:08:45.560 "memory_domains": [ 00:08:45.560 { 00:08:45.560 "dma_device_id": "system", 00:08:45.560 "dma_device_type": 1 00:08:45.560 }, 00:08:45.560 { 00:08:45.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.560 "dma_device_type": 2 00:08:45.560 }, 00:08:45.560 { 00:08:45.560 "dma_device_id": "system", 00:08:45.560 "dma_device_type": 1 00:08:45.560 }, 00:08:45.560 { 00:08:45.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.560 "dma_device_type": 2 00:08:45.560 }, 00:08:45.560 { 00:08:45.560 "dma_device_id": "system", 00:08:45.560 "dma_device_type": 1 00:08:45.560 }, 00:08:45.560 { 00:08:45.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:45.560 "dma_device_type": 2 00:08:45.560 } 00:08:45.560 ], 00:08:45.560 "driver_specific": { 00:08:45.560 "raid": { 00:08:45.560 "uuid": "62f79ba8-124a-4887-90b8-be466ad3f142", 00:08:45.560 "strip_size_kb": 64, 00:08:45.560 "state": "online", 00:08:45.560 "raid_level": "raid0", 00:08:45.560 "superblock": false, 00:08:45.560 "num_base_bdevs": 3, 00:08:45.560 "num_base_bdevs_discovered": 3, 00:08:45.560 "num_base_bdevs_operational": 3, 00:08:45.560 "base_bdevs_list": [ 00:08:45.560 { 00:08:45.560 "name": "NewBaseBdev", 00:08:45.560 "uuid": "4447ab3b-bc7a-4727-b2ea-2c275bfce80d", 00:08:45.560 "is_configured": true, 00:08:45.560 "data_offset": 0, 00:08:45.560 "data_size": 65536 00:08:45.560 }, 00:08:45.560 { 00:08:45.560 "name": "BaseBdev2", 00:08:45.560 "uuid": "8036f9f0-80ab-4b19-8a9a-5f4a94776543", 00:08:45.560 "is_configured": true, 00:08:45.560 "data_offset": 0, 00:08:45.560 "data_size": 65536 00:08:45.560 }, 00:08:45.560 { 00:08:45.560 "name": "BaseBdev3", 00:08:45.560 "uuid": "6f552363-6779-44ab-be35-89b667534d42", 00:08:45.560 "is_configured": true, 00:08:45.560 "data_offset": 0, 00:08:45.560 "data_size": 65536 00:08:45.560 } 00:08:45.560 ] 00:08:45.560 } 00:08:45.560 } 00:08:45.560 }' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:45.560 BaseBdev2 00:08:45.560 BaseBdev3' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:45.560 09:27:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.560 09:27:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.560 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.818 [2024-11-15 09:27:34.049624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.818 [2024-11-15 09:27:34.049762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.818 [2024-11-15 09:27:34.049886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.818 [2024-11-15 09:27:34.049944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.818 [2024-11-15 09:27:34.049957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64112 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 64112 ']' 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 64112 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64112 00:08:45.818 killing process with pid 64112 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64112' 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 64112 00:08:45.818 [2024-11-15 09:27:34.098247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.818 09:27:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 64112 00:08:46.076 [2024-11-15 09:27:34.434804] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:47.486 ************************************ 00:08:47.486 END TEST raid_state_function_test 00:08:47.486 ************************************ 00:08:47.486 00:08:47.486 real 0m11.068s 00:08:47.486 user 0m17.440s 00:08:47.486 sys 0m2.048s 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 09:27:35 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 
00:08:47.486 09:27:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:47.486 09:27:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:47.486 09:27:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 ************************************ 00:08:47.486 START TEST raid_state_function_test_sb 00:08:47.486 ************************************ 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:47.486 09:27:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64739 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64739' 00:08:47.486 Process raid pid: 64739 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 64739 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64739 ']' 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:47.486 09:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.486 [2024-11-15 09:27:35.860859] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:08:47.486 [2024-11-15 09:27:35.861126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.768 [2024-11-15 09:27:36.045008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.768 [2024-11-15 09:27:36.170724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.043 [2024-11-15 09:27:36.394603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.043 [2024-11-15 09:27:36.394750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.611 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.611 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:48.611 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.611 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.611 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.611 [2024-11-15 09:27:36.833270] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.611 [2024-11-15 09:27:36.833423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.611 [2024-11-15 09:27:36.833439] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.611 [2024-11-15 09:27:36.833450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.611 [2024-11-15 09:27:36.833458] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:48.611 [2024-11-15 09:27:36.833467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.612 "name": "Existed_Raid", 00:08:48.612 "uuid": "3efa1cfc-9b58-42ea-9dd3-bf9e80bcd4c2", 00:08:48.612 "strip_size_kb": 64, 00:08:48.612 "state": "configuring", 00:08:48.612 "raid_level": "raid0", 00:08:48.612 "superblock": true, 00:08:48.612 "num_base_bdevs": 3, 00:08:48.612 "num_base_bdevs_discovered": 0, 00:08:48.612 "num_base_bdevs_operational": 3, 00:08:48.612 "base_bdevs_list": [ 00:08:48.612 { 00:08:48.612 "name": "BaseBdev1", 00:08:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.612 "is_configured": false, 00:08:48.612 "data_offset": 0, 00:08:48.612 "data_size": 0 00:08:48.612 }, 00:08:48.612 { 00:08:48.612 "name": "BaseBdev2", 00:08:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.612 "is_configured": false, 00:08:48.612 "data_offset": 0, 00:08:48.612 "data_size": 0 00:08:48.612 }, 00:08:48.612 { 00:08:48.612 "name": "BaseBdev3", 00:08:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.612 "is_configured": false, 00:08:48.612 "data_offset": 0, 00:08:48.612 "data_size": 0 00:08:48.612 } 00:08:48.612 ] 00:08:48.612 }' 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.612 09:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.871 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.871 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.871 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.871 [2024-11-15 09:27:37.336353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.131 [2024-11-15 09:27:37.336491] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.131 [2024-11-15 09:27:37.348351] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.131 [2024-11-15 09:27:37.348412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.131 [2024-11-15 09:27:37.348423] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.131 [2024-11-15 09:27:37.348434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.131 [2024-11-15 09:27:37.348442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.131 [2024-11-15 09:27:37.348453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.131 [2024-11-15 09:27:37.397398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.131 BaseBdev1 
00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.131 [ 00:08:49.131 { 00:08:49.131 "name": "BaseBdev1", 00:08:49.131 "aliases": [ 00:08:49.131 "88cb6425-9d3a-419d-ade8-f45960b821be" 00:08:49.131 ], 00:08:49.131 "product_name": "Malloc disk", 00:08:49.131 "block_size": 512, 00:08:49.131 "num_blocks": 65536, 00:08:49.131 "uuid": "88cb6425-9d3a-419d-ade8-f45960b821be", 00:08:49.131 "assigned_rate_limits": { 00:08:49.131 
"rw_ios_per_sec": 0, 00:08:49.131 "rw_mbytes_per_sec": 0, 00:08:49.131 "r_mbytes_per_sec": 0, 00:08:49.131 "w_mbytes_per_sec": 0 00:08:49.131 }, 00:08:49.131 "claimed": true, 00:08:49.131 "claim_type": "exclusive_write", 00:08:49.131 "zoned": false, 00:08:49.131 "supported_io_types": { 00:08:49.131 "read": true, 00:08:49.131 "write": true, 00:08:49.131 "unmap": true, 00:08:49.131 "flush": true, 00:08:49.131 "reset": true, 00:08:49.131 "nvme_admin": false, 00:08:49.131 "nvme_io": false, 00:08:49.131 "nvme_io_md": false, 00:08:49.131 "write_zeroes": true, 00:08:49.131 "zcopy": true, 00:08:49.131 "get_zone_info": false, 00:08:49.131 "zone_management": false, 00:08:49.131 "zone_append": false, 00:08:49.131 "compare": false, 00:08:49.131 "compare_and_write": false, 00:08:49.131 "abort": true, 00:08:49.131 "seek_hole": false, 00:08:49.131 "seek_data": false, 00:08:49.131 "copy": true, 00:08:49.131 "nvme_iov_md": false 00:08:49.131 }, 00:08:49.131 "memory_domains": [ 00:08:49.131 { 00:08:49.131 "dma_device_id": "system", 00:08:49.131 "dma_device_type": 1 00:08:49.131 }, 00:08:49.131 { 00:08:49.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.131 "dma_device_type": 2 00:08:49.131 } 00:08:49.131 ], 00:08:49.131 "driver_specific": {} 00:08:49.131 } 00:08:49.131 ] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.131 "name": "Existed_Raid", 00:08:49.131 "uuid": "7cdb3cdc-c33c-4ca0-8358-2ec6f55e47f6", 00:08:49.131 "strip_size_kb": 64, 00:08:49.131 "state": "configuring", 00:08:49.131 "raid_level": "raid0", 00:08:49.131 "superblock": true, 00:08:49.131 "num_base_bdevs": 3, 00:08:49.131 "num_base_bdevs_discovered": 1, 00:08:49.131 "num_base_bdevs_operational": 3, 00:08:49.131 "base_bdevs_list": [ 00:08:49.131 { 00:08:49.131 "name": "BaseBdev1", 00:08:49.131 "uuid": "88cb6425-9d3a-419d-ade8-f45960b821be", 00:08:49.131 "is_configured": true, 00:08:49.131 "data_offset": 2048, 00:08:49.131 "data_size": 63488 
00:08:49.131 }, 00:08:49.131 { 00:08:49.131 "name": "BaseBdev2", 00:08:49.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.131 "is_configured": false, 00:08:49.131 "data_offset": 0, 00:08:49.131 "data_size": 0 00:08:49.131 }, 00:08:49.131 { 00:08:49.131 "name": "BaseBdev3", 00:08:49.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.131 "is_configured": false, 00:08:49.131 "data_offset": 0, 00:08:49.131 "data_size": 0 00:08:49.131 } 00:08:49.131 ] 00:08:49.131 }' 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.131 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.700 [2024-11-15 09:27:37.868716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.700 [2024-11-15 09:27:37.868792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.700 [2024-11-15 09:27:37.880761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.700 [2024-11-15 
09:27:37.882787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.700 [2024-11-15 09:27:37.882832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.700 [2024-11-15 09:27:37.882842] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:49.700 [2024-11-15 09:27:37.882861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.700 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.700 "name": "Existed_Raid", 00:08:49.700 "uuid": "47bdbb3d-cf58-44f4-bef7-61e3609590e6", 00:08:49.701 "strip_size_kb": 64, 00:08:49.701 "state": "configuring", 00:08:49.701 "raid_level": "raid0", 00:08:49.701 "superblock": true, 00:08:49.701 "num_base_bdevs": 3, 00:08:49.701 "num_base_bdevs_discovered": 1, 00:08:49.701 "num_base_bdevs_operational": 3, 00:08:49.701 "base_bdevs_list": [ 00:08:49.701 { 00:08:49.701 "name": "BaseBdev1", 00:08:49.701 "uuid": "88cb6425-9d3a-419d-ade8-f45960b821be", 00:08:49.701 "is_configured": true, 00:08:49.701 "data_offset": 2048, 00:08:49.701 "data_size": 63488 00:08:49.701 }, 00:08:49.701 { 00:08:49.701 "name": "BaseBdev2", 00:08:49.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.701 "is_configured": false, 00:08:49.701 "data_offset": 0, 00:08:49.701 "data_size": 0 00:08:49.701 }, 00:08:49.701 { 00:08:49.701 "name": "BaseBdev3", 00:08:49.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.701 "is_configured": false, 00:08:49.701 "data_offset": 0, 00:08:49.701 "data_size": 0 00:08:49.701 } 00:08:49.701 ] 00:08:49.701 }' 00:08:49.701 09:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.701 09:27:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.959 [2024-11-15 09:27:38.404801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.959 BaseBdev2 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.959 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.217 [ 00:08:50.217 { 00:08:50.217 "name": "BaseBdev2", 00:08:50.217 "aliases": [ 00:08:50.217 "4d13c6e9-4147-4386-aaff-be7c24265b84" 00:08:50.217 ], 00:08:50.217 "product_name": "Malloc disk", 00:08:50.217 "block_size": 512, 00:08:50.217 "num_blocks": 65536, 00:08:50.217 "uuid": "4d13c6e9-4147-4386-aaff-be7c24265b84", 00:08:50.217 "assigned_rate_limits": { 00:08:50.217 "rw_ios_per_sec": 0, 00:08:50.217 "rw_mbytes_per_sec": 0, 00:08:50.217 "r_mbytes_per_sec": 0, 00:08:50.217 "w_mbytes_per_sec": 0 00:08:50.217 }, 00:08:50.217 "claimed": true, 00:08:50.217 "claim_type": "exclusive_write", 00:08:50.217 "zoned": false, 00:08:50.217 "supported_io_types": { 00:08:50.217 "read": true, 00:08:50.217 "write": true, 00:08:50.217 "unmap": true, 00:08:50.217 "flush": true, 00:08:50.217 "reset": true, 00:08:50.217 "nvme_admin": false, 00:08:50.217 "nvme_io": false, 00:08:50.217 "nvme_io_md": false, 00:08:50.217 "write_zeroes": true, 00:08:50.217 "zcopy": true, 00:08:50.217 "get_zone_info": false, 00:08:50.217 "zone_management": false, 00:08:50.217 "zone_append": false, 00:08:50.217 "compare": false, 00:08:50.217 "compare_and_write": false, 00:08:50.217 "abort": true, 00:08:50.217 "seek_hole": false, 00:08:50.217 "seek_data": false, 00:08:50.217 "copy": true, 00:08:50.217 "nvme_iov_md": false 00:08:50.217 }, 00:08:50.217 "memory_domains": [ 00:08:50.217 { 00:08:50.217 "dma_device_id": "system", 00:08:50.217 "dma_device_type": 1 00:08:50.217 }, 00:08:50.217 { 00:08:50.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.217 "dma_device_type": 2 00:08:50.217 } 00:08:50.217 ], 00:08:50.217 "driver_specific": {} 00:08:50.217 } 00:08:50.217 ] 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.217 "name": "Existed_Raid", 00:08:50.217 "uuid": "47bdbb3d-cf58-44f4-bef7-61e3609590e6", 00:08:50.217 "strip_size_kb": 64, 00:08:50.217 "state": "configuring", 00:08:50.217 "raid_level": "raid0", 00:08:50.217 "superblock": true, 00:08:50.217 "num_base_bdevs": 3, 00:08:50.217 "num_base_bdevs_discovered": 2, 00:08:50.217 "num_base_bdevs_operational": 3, 00:08:50.217 "base_bdevs_list": [ 00:08:50.217 { 00:08:50.217 "name": "BaseBdev1", 00:08:50.217 "uuid": "88cb6425-9d3a-419d-ade8-f45960b821be", 00:08:50.217 "is_configured": true, 00:08:50.217 "data_offset": 2048, 00:08:50.217 "data_size": 63488 00:08:50.217 }, 00:08:50.217 { 00:08:50.217 "name": "BaseBdev2", 00:08:50.217 "uuid": "4d13c6e9-4147-4386-aaff-be7c24265b84", 00:08:50.217 "is_configured": true, 00:08:50.217 "data_offset": 2048, 00:08:50.217 "data_size": 63488 00:08:50.217 }, 00:08:50.217 { 00:08:50.217 "name": "BaseBdev3", 00:08:50.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.217 "is_configured": false, 00:08:50.217 "data_offset": 0, 00:08:50.217 "data_size": 0 00:08:50.217 } 00:08:50.217 ] 00:08:50.217 }' 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.217 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.475 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.475 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.475 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.734 [2024-11-15 09:27:38.994022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.734 [2024-11-15 09:27:38.994402] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.734 [2024-11-15 09:27:38.994466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:50.734 [2024-11-15 09:27:38.994757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:50.734 [2024-11-15 09:27:38.994983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.734 BaseBdev3 00:08:50.734 [2024-11-15 09:27:38.995035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:50.734 [2024-11-15 09:27:38.995243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.734 09:27:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.734 [ 00:08:50.734 { 00:08:50.734 "name": "BaseBdev3", 00:08:50.734 "aliases": [ 00:08:50.734 "ec7bf42b-c74a-445c-8f06-e9b94c818086" 00:08:50.734 ], 00:08:50.734 "product_name": "Malloc disk", 00:08:50.734 "block_size": 512, 00:08:50.734 "num_blocks": 65536, 00:08:50.734 "uuid": "ec7bf42b-c74a-445c-8f06-e9b94c818086", 00:08:50.734 "assigned_rate_limits": { 00:08:50.734 "rw_ios_per_sec": 0, 00:08:50.734 "rw_mbytes_per_sec": 0, 00:08:50.734 "r_mbytes_per_sec": 0, 00:08:50.734 "w_mbytes_per_sec": 0 00:08:50.734 }, 00:08:50.734 "claimed": true, 00:08:50.734 "claim_type": "exclusive_write", 00:08:50.734 "zoned": false, 00:08:50.734 "supported_io_types": { 00:08:50.734 "read": true, 00:08:50.734 "write": true, 00:08:50.734 "unmap": true, 00:08:50.734 "flush": true, 00:08:50.734 "reset": true, 00:08:50.734 "nvme_admin": false, 00:08:50.734 "nvme_io": false, 00:08:50.734 "nvme_io_md": false, 00:08:50.734 "write_zeroes": true, 00:08:50.734 "zcopy": true, 00:08:50.734 "get_zone_info": false, 00:08:50.734 "zone_management": false, 00:08:50.734 "zone_append": false, 00:08:50.734 "compare": false, 00:08:50.734 "compare_and_write": false, 00:08:50.734 "abort": true, 00:08:50.734 "seek_hole": false, 00:08:50.734 "seek_data": false, 00:08:50.734 "copy": true, 00:08:50.734 "nvme_iov_md": false 00:08:50.734 }, 00:08:50.734 "memory_domains": [ 00:08:50.734 { 00:08:50.734 "dma_device_id": "system", 00:08:50.734 "dma_device_type": 1 00:08:50.734 }, 00:08:50.734 { 00:08:50.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.734 "dma_device_type": 2 00:08:50.734 } 00:08:50.734 ], 00:08:50.734 "driver_specific": 
{} 00:08:50.734 } 00:08:50.734 ] 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.734 "name": "Existed_Raid", 00:08:50.734 "uuid": "47bdbb3d-cf58-44f4-bef7-61e3609590e6", 00:08:50.734 "strip_size_kb": 64, 00:08:50.734 "state": "online", 00:08:50.734 "raid_level": "raid0", 00:08:50.734 "superblock": true, 00:08:50.734 "num_base_bdevs": 3, 00:08:50.734 "num_base_bdevs_discovered": 3, 00:08:50.734 "num_base_bdevs_operational": 3, 00:08:50.734 "base_bdevs_list": [ 00:08:50.734 { 00:08:50.734 "name": "BaseBdev1", 00:08:50.734 "uuid": "88cb6425-9d3a-419d-ade8-f45960b821be", 00:08:50.734 "is_configured": true, 00:08:50.734 "data_offset": 2048, 00:08:50.734 "data_size": 63488 00:08:50.734 }, 00:08:50.734 { 00:08:50.734 "name": "BaseBdev2", 00:08:50.734 "uuid": "4d13c6e9-4147-4386-aaff-be7c24265b84", 00:08:50.734 "is_configured": true, 00:08:50.734 "data_offset": 2048, 00:08:50.734 "data_size": 63488 00:08:50.734 }, 00:08:50.734 { 00:08:50.734 "name": "BaseBdev3", 00:08:50.734 "uuid": "ec7bf42b-c74a-445c-8f06-e9b94c818086", 00:08:50.734 "is_configured": true, 00:08:50.734 "data_offset": 2048, 00:08:50.734 "data_size": 63488 00:08:50.734 } 00:08:50.734 ] 00:08:50.734 }' 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.734 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 [2024-11-15 09:27:39.501538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.305 "name": "Existed_Raid", 00:08:51.305 "aliases": [ 00:08:51.305 "47bdbb3d-cf58-44f4-bef7-61e3609590e6" 00:08:51.305 ], 00:08:51.305 "product_name": "Raid Volume", 00:08:51.305 "block_size": 512, 00:08:51.305 "num_blocks": 190464, 00:08:51.305 "uuid": "47bdbb3d-cf58-44f4-bef7-61e3609590e6", 00:08:51.305 "assigned_rate_limits": { 00:08:51.305 "rw_ios_per_sec": 0, 00:08:51.305 "rw_mbytes_per_sec": 0, 00:08:51.305 "r_mbytes_per_sec": 0, 00:08:51.305 "w_mbytes_per_sec": 0 00:08:51.305 }, 00:08:51.305 "claimed": false, 00:08:51.305 "zoned": false, 00:08:51.305 "supported_io_types": { 00:08:51.305 "read": true, 00:08:51.305 "write": true, 00:08:51.305 "unmap": true, 00:08:51.305 "flush": true, 00:08:51.305 "reset": true, 00:08:51.305 "nvme_admin": false, 00:08:51.305 "nvme_io": false, 00:08:51.305 "nvme_io_md": false, 00:08:51.305 
"write_zeroes": true, 00:08:51.305 "zcopy": false, 00:08:51.305 "get_zone_info": false, 00:08:51.305 "zone_management": false, 00:08:51.305 "zone_append": false, 00:08:51.305 "compare": false, 00:08:51.305 "compare_and_write": false, 00:08:51.305 "abort": false, 00:08:51.305 "seek_hole": false, 00:08:51.305 "seek_data": false, 00:08:51.305 "copy": false, 00:08:51.305 "nvme_iov_md": false 00:08:51.305 }, 00:08:51.305 "memory_domains": [ 00:08:51.305 { 00:08:51.305 "dma_device_id": "system", 00:08:51.305 "dma_device_type": 1 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.305 "dma_device_type": 2 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "dma_device_id": "system", 00:08:51.305 "dma_device_type": 1 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.305 "dma_device_type": 2 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "dma_device_id": "system", 00:08:51.305 "dma_device_type": 1 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.305 "dma_device_type": 2 00:08:51.305 } 00:08:51.305 ], 00:08:51.305 "driver_specific": { 00:08:51.305 "raid": { 00:08:51.305 "uuid": "47bdbb3d-cf58-44f4-bef7-61e3609590e6", 00:08:51.305 "strip_size_kb": 64, 00:08:51.305 "state": "online", 00:08:51.305 "raid_level": "raid0", 00:08:51.305 "superblock": true, 00:08:51.305 "num_base_bdevs": 3, 00:08:51.305 "num_base_bdevs_discovered": 3, 00:08:51.305 "num_base_bdevs_operational": 3, 00:08:51.305 "base_bdevs_list": [ 00:08:51.305 { 00:08:51.305 "name": "BaseBdev1", 00:08:51.305 "uuid": "88cb6425-9d3a-419d-ade8-f45960b821be", 00:08:51.305 "is_configured": true, 00:08:51.305 "data_offset": 2048, 00:08:51.305 "data_size": 63488 00:08:51.305 }, 00:08:51.305 { 00:08:51.305 "name": "BaseBdev2", 00:08:51.305 "uuid": "4d13c6e9-4147-4386-aaff-be7c24265b84", 00:08:51.305 "is_configured": true, 00:08:51.305 "data_offset": 2048, 00:08:51.305 "data_size": 63488 00:08:51.305 }, 
00:08:51.305 { 00:08:51.305 "name": "BaseBdev3", 00:08:51.305 "uuid": "ec7bf42b-c74a-445c-8f06-e9b94c818086", 00:08:51.305 "is_configured": true, 00:08:51.305 "data_offset": 2048, 00:08:51.305 "data_size": 63488 00:08:51.305 } 00:08:51.305 ] 00:08:51.305 } 00:08:51.305 } 00:08:51.305 }' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:51.305 BaseBdev2 00:08:51.305 BaseBdev3' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.305 
09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.305 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.565 [2024-11-15 09:27:39.784866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.565 [2024-11-15 09:27:39.784913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.565 [2024-11-15 09:27:39.784981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.565 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.565 "name": "Existed_Raid", 00:08:51.566 "uuid": "47bdbb3d-cf58-44f4-bef7-61e3609590e6", 00:08:51.566 "strip_size_kb": 64, 00:08:51.566 "state": "offline", 00:08:51.566 "raid_level": "raid0", 00:08:51.566 "superblock": true, 00:08:51.566 "num_base_bdevs": 3, 00:08:51.566 "num_base_bdevs_discovered": 2, 00:08:51.566 "num_base_bdevs_operational": 2, 00:08:51.566 "base_bdevs_list": [ 00:08:51.566 { 00:08:51.566 "name": null, 00:08:51.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.566 "is_configured": false, 00:08:51.566 "data_offset": 0, 00:08:51.566 "data_size": 63488 00:08:51.566 }, 00:08:51.566 { 00:08:51.566 "name": "BaseBdev2", 00:08:51.566 "uuid": "4d13c6e9-4147-4386-aaff-be7c24265b84", 00:08:51.566 "is_configured": true, 00:08:51.566 "data_offset": 2048, 00:08:51.566 "data_size": 63488 00:08:51.566 }, 00:08:51.566 { 00:08:51.566 "name": "BaseBdev3", 00:08:51.566 "uuid": "ec7bf42b-c74a-445c-8f06-e9b94c818086", 
00:08:51.566 "is_configured": true, 00:08:51.566 "data_offset": 2048, 00:08:51.566 "data_size": 63488 00:08:51.566 } 00:08:51.566 ] 00:08:51.566 }' 00:08:51.566 09:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.566 09:27:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.134 [2024-11-15 09:27:40.406476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.134 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:52.135 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:52.135 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:52.135 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.135 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.135 [2024-11-15 09:27:40.569131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.135 [2024-11-15 09:27:40.569331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.394 BaseBdev2 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.394 [ 00:08:52.394 { 00:08:52.394 "name": "BaseBdev2", 00:08:52.394 "aliases": [ 00:08:52.394 "56b25f24-5b36-408d-ba74-97ad8d102013" 00:08:52.394 ], 00:08:52.394 "product_name": "Malloc disk", 00:08:52.394 "block_size": 512, 00:08:52.394 "num_blocks": 65536, 00:08:52.394 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:52.394 "assigned_rate_limits": { 00:08:52.394 "rw_ios_per_sec": 0, 00:08:52.394 "rw_mbytes_per_sec": 0, 00:08:52.394 "r_mbytes_per_sec": 0, 00:08:52.394 "w_mbytes_per_sec": 0 00:08:52.394 }, 00:08:52.394 "claimed": false, 00:08:52.394 "zoned": false, 00:08:52.394 "supported_io_types": { 00:08:52.394 "read": true, 00:08:52.394 "write": true, 00:08:52.394 "unmap": true, 00:08:52.394 "flush": true, 00:08:52.394 "reset": true, 00:08:52.394 "nvme_admin": false, 00:08:52.394 "nvme_io": false, 00:08:52.394 "nvme_io_md": false, 00:08:52.394 "write_zeroes": true, 00:08:52.394 "zcopy": true, 00:08:52.394 "get_zone_info": false, 00:08:52.394 "zone_management": false, 00:08:52.394 
"zone_append": false, 00:08:52.394 "compare": false, 00:08:52.394 "compare_and_write": false, 00:08:52.394 "abort": true, 00:08:52.394 "seek_hole": false, 00:08:52.394 "seek_data": false, 00:08:52.394 "copy": true, 00:08:52.394 "nvme_iov_md": false 00:08:52.394 }, 00:08:52.394 "memory_domains": [ 00:08:52.394 { 00:08:52.394 "dma_device_id": "system", 00:08:52.394 "dma_device_type": 1 00:08:52.394 }, 00:08:52.394 { 00:08:52.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.394 "dma_device_type": 2 00:08:52.394 } 00:08:52.394 ], 00:08:52.394 "driver_specific": {} 00:08:52.394 } 00:08:52.394 ] 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.394 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.653 BaseBdev3 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:52.653 
09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.653 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.653 [ 00:08:52.653 { 00:08:52.653 "name": "BaseBdev3", 00:08:52.653 "aliases": [ 00:08:52.653 "9a4a819e-99ab-4e40-b9a5-90f134a4103e" 00:08:52.653 ], 00:08:52.653 "product_name": "Malloc disk", 00:08:52.653 "block_size": 512, 00:08:52.653 "num_blocks": 65536, 00:08:52.653 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:52.653 "assigned_rate_limits": { 00:08:52.653 "rw_ios_per_sec": 0, 00:08:52.653 "rw_mbytes_per_sec": 0, 00:08:52.653 "r_mbytes_per_sec": 0, 00:08:52.653 "w_mbytes_per_sec": 0 00:08:52.653 }, 00:08:52.653 "claimed": false, 00:08:52.653 "zoned": false, 00:08:52.653 "supported_io_types": { 00:08:52.653 "read": true, 00:08:52.653 "write": true, 00:08:52.653 "unmap": true, 00:08:52.653 "flush": true, 00:08:52.653 "reset": true, 00:08:52.653 "nvme_admin": false, 00:08:52.653 "nvme_io": false, 00:08:52.653 "nvme_io_md": false, 00:08:52.653 "write_zeroes": true, 00:08:52.653 "zcopy": true, 00:08:52.653 "get_zone_info": false, 
00:08:52.653 "zone_management": false, 00:08:52.653 "zone_append": false, 00:08:52.653 "compare": false, 00:08:52.653 "compare_and_write": false, 00:08:52.653 "abort": true, 00:08:52.653 "seek_hole": false, 00:08:52.653 "seek_data": false, 00:08:52.653 "copy": true, 00:08:52.653 "nvme_iov_md": false 00:08:52.653 }, 00:08:52.653 "memory_domains": [ 00:08:52.653 { 00:08:52.653 "dma_device_id": "system", 00:08:52.653 "dma_device_type": 1 00:08:52.653 }, 00:08:52.653 { 00:08:52.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.653 "dma_device_type": 2 00:08:52.653 } 00:08:52.653 ], 00:08:52.654 "driver_specific": {} 00:08:52.654 } 00:08:52.654 ] 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.654 [2024-11-15 09:27:40.910815] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.654 [2024-11-15 09:27:40.911001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.654 [2024-11-15 09:27:40.911062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.654 [2024-11-15 09:27:40.913339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:52.654 "name": "Existed_Raid", 00:08:52.654 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:52.654 "strip_size_kb": 64, 00:08:52.654 "state": "configuring", 00:08:52.654 "raid_level": "raid0", 00:08:52.654 "superblock": true, 00:08:52.654 "num_base_bdevs": 3, 00:08:52.654 "num_base_bdevs_discovered": 2, 00:08:52.654 "num_base_bdevs_operational": 3, 00:08:52.654 "base_bdevs_list": [ 00:08:52.654 { 00:08:52.654 "name": "BaseBdev1", 00:08:52.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.654 "is_configured": false, 00:08:52.654 "data_offset": 0, 00:08:52.654 "data_size": 0 00:08:52.654 }, 00:08:52.654 { 00:08:52.654 "name": "BaseBdev2", 00:08:52.654 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:52.654 "is_configured": true, 00:08:52.654 "data_offset": 2048, 00:08:52.654 "data_size": 63488 00:08:52.654 }, 00:08:52.654 { 00:08:52.654 "name": "BaseBdev3", 00:08:52.654 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:52.654 "is_configured": true, 00:08:52.654 "data_offset": 2048, 00:08:52.654 "data_size": 63488 00:08:52.654 } 00:08:52.654 ] 00:08:52.654 }' 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.654 09:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.221 [2024-11-15 09:27:41.405972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.221 "name": "Existed_Raid", 00:08:53.221 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:53.221 "strip_size_kb": 64, 00:08:53.221 "state": "configuring", 00:08:53.221 "raid_level": "raid0", 
00:08:53.221 "superblock": true, 00:08:53.221 "num_base_bdevs": 3, 00:08:53.221 "num_base_bdevs_discovered": 1, 00:08:53.221 "num_base_bdevs_operational": 3, 00:08:53.221 "base_bdevs_list": [ 00:08:53.221 { 00:08:53.221 "name": "BaseBdev1", 00:08:53.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.221 "is_configured": false, 00:08:53.221 "data_offset": 0, 00:08:53.221 "data_size": 0 00:08:53.221 }, 00:08:53.221 { 00:08:53.221 "name": null, 00:08:53.221 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:53.221 "is_configured": false, 00:08:53.221 "data_offset": 0, 00:08:53.221 "data_size": 63488 00:08:53.221 }, 00:08:53.221 { 00:08:53.221 "name": "BaseBdev3", 00:08:53.221 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:53.221 "is_configured": true, 00:08:53.221 "data_offset": 2048, 00:08:53.221 "data_size": 63488 00:08:53.221 } 00:08:53.221 ] 00:08:53.221 }' 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.221 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.480 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.480 [2024-11-15 09:27:41.943094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.740 BaseBdev1 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.740 [ 00:08:53.740 { 00:08:53.740 "name": "BaseBdev1", 00:08:53.740 
"aliases": [ 00:08:53.740 "e2a3b568-b740-4d40-a336-3eab414af68c" 00:08:53.740 ], 00:08:53.740 "product_name": "Malloc disk", 00:08:53.740 "block_size": 512, 00:08:53.740 "num_blocks": 65536, 00:08:53.740 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:53.740 "assigned_rate_limits": { 00:08:53.740 "rw_ios_per_sec": 0, 00:08:53.740 "rw_mbytes_per_sec": 0, 00:08:53.740 "r_mbytes_per_sec": 0, 00:08:53.740 "w_mbytes_per_sec": 0 00:08:53.740 }, 00:08:53.740 "claimed": true, 00:08:53.740 "claim_type": "exclusive_write", 00:08:53.740 "zoned": false, 00:08:53.740 "supported_io_types": { 00:08:53.740 "read": true, 00:08:53.740 "write": true, 00:08:53.740 "unmap": true, 00:08:53.740 "flush": true, 00:08:53.740 "reset": true, 00:08:53.740 "nvme_admin": false, 00:08:53.740 "nvme_io": false, 00:08:53.740 "nvme_io_md": false, 00:08:53.740 "write_zeroes": true, 00:08:53.740 "zcopy": true, 00:08:53.740 "get_zone_info": false, 00:08:53.740 "zone_management": false, 00:08:53.740 "zone_append": false, 00:08:53.740 "compare": false, 00:08:53.740 "compare_and_write": false, 00:08:53.740 "abort": true, 00:08:53.740 "seek_hole": false, 00:08:53.740 "seek_data": false, 00:08:53.740 "copy": true, 00:08:53.740 "nvme_iov_md": false 00:08:53.740 }, 00:08:53.740 "memory_domains": [ 00:08:53.740 { 00:08:53.740 "dma_device_id": "system", 00:08:53.740 "dma_device_type": 1 00:08:53.740 }, 00:08:53.740 { 00:08:53.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.740 "dma_device_type": 2 00:08:53.740 } 00:08:53.740 ], 00:08:53.740 "driver_specific": {} 00:08:53.740 } 00:08:53.740 ] 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.740 09:27:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.740 09:27:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.740 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.741 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.741 "name": "Existed_Raid", 00:08:53.741 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:53.741 "strip_size_kb": 64, 00:08:53.741 "state": "configuring", 00:08:53.741 "raid_level": "raid0", 00:08:53.741 "superblock": true, 00:08:53.741 "num_base_bdevs": 3, 00:08:53.741 
"num_base_bdevs_discovered": 2, 00:08:53.741 "num_base_bdevs_operational": 3, 00:08:53.741 "base_bdevs_list": [ 00:08:53.741 { 00:08:53.741 "name": "BaseBdev1", 00:08:53.741 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:53.741 "is_configured": true, 00:08:53.741 "data_offset": 2048, 00:08:53.741 "data_size": 63488 00:08:53.741 }, 00:08:53.741 { 00:08:53.741 "name": null, 00:08:53.741 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:53.741 "is_configured": false, 00:08:53.741 "data_offset": 0, 00:08:53.741 "data_size": 63488 00:08:53.741 }, 00:08:53.741 { 00:08:53.741 "name": "BaseBdev3", 00:08:53.741 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:53.741 "is_configured": true, 00:08:53.741 "data_offset": 2048, 00:08:53.741 "data_size": 63488 00:08:53.741 } 00:08:53.741 ] 00:08:53.741 }' 00:08:53.741 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.741 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.354 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.354 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.354 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.355 09:27:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.355 [2024-11-15 09:27:42.574114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.355 09:27:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.355 "name": "Existed_Raid", 00:08:54.355 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:54.355 "strip_size_kb": 64, 00:08:54.355 "state": "configuring", 00:08:54.355 "raid_level": "raid0", 00:08:54.355 "superblock": true, 00:08:54.355 "num_base_bdevs": 3, 00:08:54.355 "num_base_bdevs_discovered": 1, 00:08:54.355 "num_base_bdevs_operational": 3, 00:08:54.355 "base_bdevs_list": [ 00:08:54.355 { 00:08:54.355 "name": "BaseBdev1", 00:08:54.355 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:54.355 "is_configured": true, 00:08:54.355 "data_offset": 2048, 00:08:54.355 "data_size": 63488 00:08:54.355 }, 00:08:54.355 { 00:08:54.355 "name": null, 00:08:54.355 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:54.355 "is_configured": false, 00:08:54.355 "data_offset": 0, 00:08:54.355 "data_size": 63488 00:08:54.355 }, 00:08:54.355 { 00:08:54.355 "name": null, 00:08:54.355 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:54.355 "is_configured": false, 00:08:54.355 "data_offset": 0, 00:08:54.355 "data_size": 63488 00:08:54.355 } 00:08:54.355 ] 00:08:54.355 }' 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.355 09:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.614 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:54.614 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.614 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.614 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.614 09:27:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.872 [2024-11-15 09:27:43.105279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.872 "name": "Existed_Raid", 00:08:54.872 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:54.872 "strip_size_kb": 64, 00:08:54.872 "state": "configuring", 00:08:54.872 "raid_level": "raid0", 00:08:54.872 "superblock": true, 00:08:54.872 "num_base_bdevs": 3, 00:08:54.872 "num_base_bdevs_discovered": 2, 00:08:54.872 "num_base_bdevs_operational": 3, 00:08:54.872 "base_bdevs_list": [ 00:08:54.872 { 00:08:54.872 "name": "BaseBdev1", 00:08:54.872 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:54.872 "is_configured": true, 00:08:54.872 "data_offset": 2048, 00:08:54.872 "data_size": 63488 00:08:54.872 }, 00:08:54.872 { 00:08:54.872 "name": null, 00:08:54.872 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:54.872 "is_configured": false, 00:08:54.872 "data_offset": 0, 00:08:54.872 "data_size": 63488 00:08:54.872 }, 00:08:54.872 { 00:08:54.872 "name": "BaseBdev3", 00:08:54.872 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:54.872 "is_configured": true, 00:08:54.872 "data_offset": 2048, 00:08:54.872 "data_size": 63488 00:08:54.872 } 00:08:54.872 ] 00:08:54.872 }' 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.872 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.131 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.131 [2024-11-15 09:27:43.588503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.390 "name": "Existed_Raid", 00:08:55.390 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:55.390 "strip_size_kb": 64, 00:08:55.390 "state": "configuring", 00:08:55.390 "raid_level": "raid0", 00:08:55.390 "superblock": true, 00:08:55.390 "num_base_bdevs": 3, 00:08:55.390 "num_base_bdevs_discovered": 1, 00:08:55.390 "num_base_bdevs_operational": 3, 00:08:55.390 "base_bdevs_list": [ 00:08:55.390 { 00:08:55.390 "name": null, 00:08:55.390 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:55.390 "is_configured": false, 00:08:55.390 "data_offset": 0, 00:08:55.390 "data_size": 63488 00:08:55.390 }, 00:08:55.390 { 00:08:55.390 "name": null, 00:08:55.390 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:55.390 "is_configured": false, 00:08:55.390 "data_offset": 0, 00:08:55.390 "data_size": 63488 00:08:55.390 
}, 00:08:55.390 { 00:08:55.390 "name": "BaseBdev3", 00:08:55.390 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:55.390 "is_configured": true, 00:08:55.390 "data_offset": 2048, 00:08:55.390 "data_size": 63488 00:08:55.390 } 00:08:55.390 ] 00:08:55.390 }' 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.390 09:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.957 [2024-11-15 09:27:44.193588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.957 "name": "Existed_Raid", 00:08:55.957 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:55.957 "strip_size_kb": 64, 00:08:55.957 "state": "configuring", 00:08:55.957 "raid_level": "raid0", 00:08:55.957 "superblock": true, 00:08:55.957 "num_base_bdevs": 3, 00:08:55.957 "num_base_bdevs_discovered": 2, 00:08:55.957 
"num_base_bdevs_operational": 3, 00:08:55.957 "base_bdevs_list": [ 00:08:55.957 { 00:08:55.957 "name": null, 00:08:55.957 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:55.957 "is_configured": false, 00:08:55.957 "data_offset": 0, 00:08:55.957 "data_size": 63488 00:08:55.957 }, 00:08:55.957 { 00:08:55.957 "name": "BaseBdev2", 00:08:55.957 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:55.957 "is_configured": true, 00:08:55.957 "data_offset": 2048, 00:08:55.957 "data_size": 63488 00:08:55.957 }, 00:08:55.957 { 00:08:55.957 "name": "BaseBdev3", 00:08:55.957 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:55.957 "is_configured": true, 00:08:55.957 "data_offset": 2048, 00:08:55.957 "data_size": 63488 00:08:55.957 } 00:08:55.957 ] 00:08:55.957 }' 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.957 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e2a3b568-b740-4d40-a336-3eab414af68c 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.526 [2024-11-15 09:27:44.818744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:56.526 [2024-11-15 09:27:44.819193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:56.526 [2024-11-15 09:27:44.819222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.526 [2024-11-15 09:27:44.819539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:56.526 [2024-11-15 09:27:44.819731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:56.526 [2024-11-15 09:27:44.819743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:56.526 NewBaseBdev 00:08:56.526 [2024-11-15 09:27:44.819930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:08:56.526 09:27:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.526 [ 00:08:56.526 { 00:08:56.526 "name": "NewBaseBdev", 00:08:56.526 "aliases": [ 00:08:56.526 "e2a3b568-b740-4d40-a336-3eab414af68c" 00:08:56.526 ], 00:08:56.526 "product_name": "Malloc disk", 00:08:56.526 "block_size": 512, 00:08:56.526 "num_blocks": 65536, 00:08:56.526 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:56.526 "assigned_rate_limits": { 00:08:56.526 "rw_ios_per_sec": 0, 00:08:56.526 "rw_mbytes_per_sec": 0, 00:08:56.526 "r_mbytes_per_sec": 0, 00:08:56.526 "w_mbytes_per_sec": 0 00:08:56.526 }, 00:08:56.526 "claimed": true, 00:08:56.526 "claim_type": "exclusive_write", 00:08:56.526 "zoned": false, 00:08:56.526 "supported_io_types": { 00:08:56.526 "read": true, 00:08:56.526 "write": true, 00:08:56.526 "unmap": true, 
00:08:56.526 "flush": true, 00:08:56.526 "reset": true, 00:08:56.526 "nvme_admin": false, 00:08:56.526 "nvme_io": false, 00:08:56.526 "nvme_io_md": false, 00:08:56.526 "write_zeroes": true, 00:08:56.526 "zcopy": true, 00:08:56.526 "get_zone_info": false, 00:08:56.526 "zone_management": false, 00:08:56.526 "zone_append": false, 00:08:56.526 "compare": false, 00:08:56.526 "compare_and_write": false, 00:08:56.526 "abort": true, 00:08:56.526 "seek_hole": false, 00:08:56.526 "seek_data": false, 00:08:56.526 "copy": true, 00:08:56.526 "nvme_iov_md": false 00:08:56.526 }, 00:08:56.526 "memory_domains": [ 00:08:56.526 { 00:08:56.526 "dma_device_id": "system", 00:08:56.526 "dma_device_type": 1 00:08:56.526 }, 00:08:56.526 { 00:08:56.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.526 "dma_device_type": 2 00:08:56.526 } 00:08:56.526 ], 00:08:56.526 "driver_specific": {} 00:08:56.526 } 00:08:56.526 ] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.526 09:27:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.526 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.526 "name": "Existed_Raid", 00:08:56.526 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:56.526 "strip_size_kb": 64, 00:08:56.526 "state": "online", 00:08:56.526 "raid_level": "raid0", 00:08:56.526 "superblock": true, 00:08:56.526 "num_base_bdevs": 3, 00:08:56.526 "num_base_bdevs_discovered": 3, 00:08:56.526 "num_base_bdevs_operational": 3, 00:08:56.526 "base_bdevs_list": [ 00:08:56.526 { 00:08:56.526 "name": "NewBaseBdev", 00:08:56.526 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:56.526 "is_configured": true, 00:08:56.526 "data_offset": 2048, 00:08:56.526 "data_size": 63488 00:08:56.526 }, 00:08:56.526 { 00:08:56.526 "name": "BaseBdev2", 00:08:56.526 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:56.526 "is_configured": true, 00:08:56.526 "data_offset": 2048, 00:08:56.526 "data_size": 63488 00:08:56.526 }, 00:08:56.526 { 00:08:56.526 "name": "BaseBdev3", 00:08:56.526 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:56.526 "is_configured": 
true, 00:08:56.526 "data_offset": 2048, 00:08:56.526 "data_size": 63488 00:08:56.526 } 00:08:56.526 ] 00:08:56.527 }' 00:08:56.527 09:27:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.527 09:27:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.094 [2024-11-15 09:27:45.326359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.094 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.094 "name": "Existed_Raid", 00:08:57.094 "aliases": [ 00:08:57.094 "4f578517-bbbf-4a50-a680-b0f4d5e75034" 00:08:57.094 ], 00:08:57.094 "product_name": "Raid Volume", 
00:08:57.094 "block_size": 512, 00:08:57.094 "num_blocks": 190464, 00:08:57.094 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:57.094 "assigned_rate_limits": { 00:08:57.094 "rw_ios_per_sec": 0, 00:08:57.094 "rw_mbytes_per_sec": 0, 00:08:57.094 "r_mbytes_per_sec": 0, 00:08:57.094 "w_mbytes_per_sec": 0 00:08:57.094 }, 00:08:57.094 "claimed": false, 00:08:57.094 "zoned": false, 00:08:57.094 "supported_io_types": { 00:08:57.094 "read": true, 00:08:57.094 "write": true, 00:08:57.094 "unmap": true, 00:08:57.094 "flush": true, 00:08:57.094 "reset": true, 00:08:57.094 "nvme_admin": false, 00:08:57.094 "nvme_io": false, 00:08:57.094 "nvme_io_md": false, 00:08:57.094 "write_zeroes": true, 00:08:57.094 "zcopy": false, 00:08:57.094 "get_zone_info": false, 00:08:57.094 "zone_management": false, 00:08:57.094 "zone_append": false, 00:08:57.094 "compare": false, 00:08:57.094 "compare_and_write": false, 00:08:57.094 "abort": false, 00:08:57.094 "seek_hole": false, 00:08:57.094 "seek_data": false, 00:08:57.094 "copy": false, 00:08:57.094 "nvme_iov_md": false 00:08:57.094 }, 00:08:57.094 "memory_domains": [ 00:08:57.094 { 00:08:57.094 "dma_device_id": "system", 00:08:57.094 "dma_device_type": 1 00:08:57.094 }, 00:08:57.094 { 00:08:57.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.094 "dma_device_type": 2 00:08:57.094 }, 00:08:57.094 { 00:08:57.094 "dma_device_id": "system", 00:08:57.094 "dma_device_type": 1 00:08:57.094 }, 00:08:57.094 { 00:08:57.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.094 "dma_device_type": 2 00:08:57.094 }, 00:08:57.094 { 00:08:57.094 "dma_device_id": "system", 00:08:57.094 "dma_device_type": 1 00:08:57.094 }, 00:08:57.094 { 00:08:57.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.094 "dma_device_type": 2 00:08:57.094 } 00:08:57.094 ], 00:08:57.094 "driver_specific": { 00:08:57.094 "raid": { 00:08:57.094 "uuid": "4f578517-bbbf-4a50-a680-b0f4d5e75034", 00:08:57.094 "strip_size_kb": 64, 00:08:57.094 "state": "online", 00:08:57.094 
"raid_level": "raid0", 00:08:57.094 "superblock": true, 00:08:57.094 "num_base_bdevs": 3, 00:08:57.094 "num_base_bdevs_discovered": 3, 00:08:57.094 "num_base_bdevs_operational": 3, 00:08:57.094 "base_bdevs_list": [ 00:08:57.094 { 00:08:57.094 "name": "NewBaseBdev", 00:08:57.094 "uuid": "e2a3b568-b740-4d40-a336-3eab414af68c", 00:08:57.094 "is_configured": true, 00:08:57.094 "data_offset": 2048, 00:08:57.094 "data_size": 63488 00:08:57.094 }, 00:08:57.094 { 00:08:57.094 "name": "BaseBdev2", 00:08:57.094 "uuid": "56b25f24-5b36-408d-ba74-97ad8d102013", 00:08:57.094 "is_configured": true, 00:08:57.094 "data_offset": 2048, 00:08:57.094 "data_size": 63488 00:08:57.094 }, 00:08:57.094 { 00:08:57.094 "name": "BaseBdev3", 00:08:57.094 "uuid": "9a4a819e-99ab-4e40-b9a5-90f134a4103e", 00:08:57.094 "is_configured": true, 00:08:57.094 "data_offset": 2048, 00:08:57.094 "data_size": 63488 00:08:57.094 } 00:08:57.094 ] 00:08:57.094 } 00:08:57.094 } 00:08:57.094 }' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:57.095 BaseBdev2 00:08:57.095 BaseBdev3' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.095 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.377 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.377 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.377 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.377 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.377 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.377 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.377 09:27:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.377 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.378 [2024-11-15 09:27:45.613556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.378 [2024-11-15 09:27:45.613611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.378 [2024-11-15 09:27:45.613738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.378 [2024-11-15 09:27:45.613828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.378 [2024-11-15 09:27:45.613875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64739 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64739 ']' 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64739 00:08:57.378 09:27:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64739 00:08:57.378 killing process with pid 64739 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64739' 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64739 00:08:57.378 [2024-11-15 09:27:45.660395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.378 09:27:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64739 00:08:57.636 [2024-11-15 09:27:46.019233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.013 09:27:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:59.013 00:08:59.013 real 0m11.600s 00:08:59.013 user 0m18.331s 00:08:59.013 sys 0m2.030s 00:08:59.013 ************************************ 00:08:59.013 END TEST raid_state_function_test_sb 00:08:59.013 ************************************ 00:08:59.013 09:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.013 09:27:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.013 09:27:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:59.013 09:27:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:59.013 09:27:47 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.013 09:27:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.013 ************************************ 00:08:59.013 START TEST raid_superblock_test 00:08:59.013 ************************************ 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:59.013 09:27:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65370 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65370 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65370 ']' 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.013 09:27:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.272 [2024-11-15 09:27:47.545878] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:08:59.272 [2024-11-15 09:27:47.546201] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65370 ] 00:08:59.272 [2024-11-15 09:27:47.730575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.531 [2024-11-15 09:27:47.872278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.789 [2024-11-15 09:27:48.120167] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.789 [2024-11-15 09:27:48.120232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:00.051 
09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.051 malloc1 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.051 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.310 [2024-11-15 09:27:48.519604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:00.310 [2024-11-15 09:27:48.519814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.310 [2024-11-15 09:27:48.519893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:00.310 [2024-11-15 09:27:48.519942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.310 [2024-11-15 09:27:48.522577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.310 [2024-11-15 09:27:48.522679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:00.310 pt1 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.310 malloc2 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.310 [2024-11-15 09:27:48.586486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:00.310 [2024-11-15 09:27:48.586573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.310 [2024-11-15 09:27:48.586601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:00.310 [2024-11-15 09:27:48.586611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.310 [2024-11-15 09:27:48.589214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.310 [2024-11-15 09:27:48.589269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:00.310 
pt2 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.310 malloc3 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.310 [2024-11-15 09:27:48.662550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:00.310 [2024-11-15 09:27:48.662735] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.310 [2024-11-15 09:27:48.662780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:00.310 [2024-11-15 09:27:48.662836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.310 [2024-11-15 09:27:48.665573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.310 [2024-11-15 09:27:48.665709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:00.310 pt3 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.310 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.311 [2024-11-15 09:27:48.678654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:00.311 [2024-11-15 09:27:48.681033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:00.311 [2024-11-15 09:27:48.681179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:00.311 [2024-11-15 09:27:48.681436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:00.311 [2024-11-15 09:27:48.681498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.311 [2024-11-15 09:27:48.681873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:00.311 [2024-11-15 09:27:48.682129] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:00.311 [2024-11-15 09:27:48.682180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:00.311 [2024-11-15 09:27:48.682502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.311 09:27:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.311 "name": "raid_bdev1", 00:09:00.311 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:00.311 "strip_size_kb": 64, 00:09:00.311 "state": "online", 00:09:00.311 "raid_level": "raid0", 00:09:00.311 "superblock": true, 00:09:00.311 "num_base_bdevs": 3, 00:09:00.311 "num_base_bdevs_discovered": 3, 00:09:00.311 "num_base_bdevs_operational": 3, 00:09:00.311 "base_bdevs_list": [ 00:09:00.311 { 00:09:00.311 "name": "pt1", 00:09:00.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.311 "is_configured": true, 00:09:00.311 "data_offset": 2048, 00:09:00.311 "data_size": 63488 00:09:00.311 }, 00:09:00.311 { 00:09:00.311 "name": "pt2", 00:09:00.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.311 "is_configured": true, 00:09:00.311 "data_offset": 2048, 00:09:00.311 "data_size": 63488 00:09:00.311 }, 00:09:00.311 { 00:09:00.311 "name": "pt3", 00:09:00.311 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.311 "is_configured": true, 00:09:00.311 "data_offset": 2048, 00:09:00.311 "data_size": 63488 00:09:00.311 } 00:09:00.311 ] 00:09:00.311 }' 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.311 09:27:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.877 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.878 [2024-11-15 09:27:49.126310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.878 "name": "raid_bdev1", 00:09:00.878 "aliases": [ 00:09:00.878 "9d543cc6-a35c-42fd-9998-55bfc9abbe3d" 00:09:00.878 ], 00:09:00.878 "product_name": "Raid Volume", 00:09:00.878 "block_size": 512, 00:09:00.878 "num_blocks": 190464, 00:09:00.878 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:00.878 "assigned_rate_limits": { 00:09:00.878 "rw_ios_per_sec": 0, 00:09:00.878 "rw_mbytes_per_sec": 0, 00:09:00.878 "r_mbytes_per_sec": 0, 00:09:00.878 "w_mbytes_per_sec": 0 00:09:00.878 }, 00:09:00.878 "claimed": false, 00:09:00.878 "zoned": false, 00:09:00.878 "supported_io_types": { 00:09:00.878 "read": true, 00:09:00.878 "write": true, 00:09:00.878 "unmap": true, 00:09:00.878 "flush": true, 00:09:00.878 "reset": true, 00:09:00.878 "nvme_admin": false, 00:09:00.878 "nvme_io": false, 00:09:00.878 "nvme_io_md": false, 00:09:00.878 "write_zeroes": true, 00:09:00.878 "zcopy": false, 00:09:00.878 "get_zone_info": false, 00:09:00.878 "zone_management": false, 00:09:00.878 "zone_append": false, 00:09:00.878 "compare": 
false, 00:09:00.878 "compare_and_write": false, 00:09:00.878 "abort": false, 00:09:00.878 "seek_hole": false, 00:09:00.878 "seek_data": false, 00:09:00.878 "copy": false, 00:09:00.878 "nvme_iov_md": false 00:09:00.878 }, 00:09:00.878 "memory_domains": [ 00:09:00.878 { 00:09:00.878 "dma_device_id": "system", 00:09:00.878 "dma_device_type": 1 00:09:00.878 }, 00:09:00.878 { 00:09:00.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.878 "dma_device_type": 2 00:09:00.878 }, 00:09:00.878 { 00:09:00.878 "dma_device_id": "system", 00:09:00.878 "dma_device_type": 1 00:09:00.878 }, 00:09:00.878 { 00:09:00.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.878 "dma_device_type": 2 00:09:00.878 }, 00:09:00.878 { 00:09:00.878 "dma_device_id": "system", 00:09:00.878 "dma_device_type": 1 00:09:00.878 }, 00:09:00.878 { 00:09:00.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.878 "dma_device_type": 2 00:09:00.878 } 00:09:00.878 ], 00:09:00.878 "driver_specific": { 00:09:00.878 "raid": { 00:09:00.878 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:00.878 "strip_size_kb": 64, 00:09:00.878 "state": "online", 00:09:00.878 "raid_level": "raid0", 00:09:00.878 "superblock": true, 00:09:00.878 "num_base_bdevs": 3, 00:09:00.878 "num_base_bdevs_discovered": 3, 00:09:00.878 "num_base_bdevs_operational": 3, 00:09:00.878 "base_bdevs_list": [ 00:09:00.878 { 00:09:00.878 "name": "pt1", 00:09:00.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.878 "is_configured": true, 00:09:00.878 "data_offset": 2048, 00:09:00.878 "data_size": 63488 00:09:00.878 }, 00:09:00.878 { 00:09:00.878 "name": "pt2", 00:09:00.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.878 "is_configured": true, 00:09:00.878 "data_offset": 2048, 00:09:00.878 "data_size": 63488 00:09:00.878 }, 00:09:00.878 { 00:09:00.878 "name": "pt3", 00:09:00.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.878 "is_configured": true, 00:09:00.878 "data_offset": 2048, 00:09:00.878 "data_size": 
63488 00:09:00.878 } 00:09:00.878 ] 00:09:00.878 } 00:09:00.878 } 00:09:00.878 }' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:00.878 pt2 00:09:00.878 pt3' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.878 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 [2024-11-15 09:27:49.425733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9d543cc6-a35c-42fd-9998-55bfc9abbe3d 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9d543cc6-a35c-42fd-9998-55bfc9abbe3d ']' 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 [2024-11-15 09:27:49.473353] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.138 [2024-11-15 09:27:49.473502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.138 [2024-11-15 09:27:49.473619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.138 [2024-11-15 09:27:49.473691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.138 [2024-11-15 09:27:49.473703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.139 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:01.139 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:01.139 09:27:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.139 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.139 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.398 [2024-11-15 09:27:49.621183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:01.398 [2024-11-15 09:27:49.623506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:01.398 [2024-11-15 09:27:49.623581] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:01.398 [2024-11-15 09:27:49.623647] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:01.398 [2024-11-15 09:27:49.623713] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:01.398 [2024-11-15 09:27:49.623735] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:01.398 [2024-11-15 09:27:49.623754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.398 [2024-11-15 09:27:49.623767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:01.398 request: 00:09:01.398 { 00:09:01.398 "name": "raid_bdev1", 00:09:01.398 "raid_level": "raid0", 00:09:01.398 "base_bdevs": [ 00:09:01.398 "malloc1", 00:09:01.398 "malloc2", 00:09:01.398 "malloc3" 00:09:01.398 ], 00:09:01.398 "strip_size_kb": 64, 00:09:01.398 "superblock": false, 00:09:01.398 "method": "bdev_raid_create", 00:09:01.398 "req_id": 1 00:09:01.398 } 00:09:01.398 Got JSON-RPC error response 00:09:01.398 response: 00:09:01.398 { 00:09:01.398 "code": -17, 00:09:01.398 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:01.398 } 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.398 [2024-11-15 09:27:49.689034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:01.398 [2024-11-15 09:27:49.689231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.398 [2024-11-15 09:27:49.689296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:01.398 [2024-11-15 09:27:49.689332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.398 [2024-11-15 09:27:49.691967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.398 [2024-11-15 09:27:49.692078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:01.398 [2024-11-15 09:27:49.692326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:01.398 [2024-11-15 09:27:49.692422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:01.398 pt1 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.398 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.399 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.399 "name": "raid_bdev1", 00:09:01.399 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:01.399 
"strip_size_kb": 64, 00:09:01.399 "state": "configuring", 00:09:01.399 "raid_level": "raid0", 00:09:01.399 "superblock": true, 00:09:01.399 "num_base_bdevs": 3, 00:09:01.399 "num_base_bdevs_discovered": 1, 00:09:01.399 "num_base_bdevs_operational": 3, 00:09:01.399 "base_bdevs_list": [ 00:09:01.399 { 00:09:01.399 "name": "pt1", 00:09:01.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.399 "is_configured": true, 00:09:01.399 "data_offset": 2048, 00:09:01.399 "data_size": 63488 00:09:01.399 }, 00:09:01.399 { 00:09:01.399 "name": null, 00:09:01.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.399 "is_configured": false, 00:09:01.399 "data_offset": 2048, 00:09:01.399 "data_size": 63488 00:09:01.399 }, 00:09:01.399 { 00:09:01.399 "name": null, 00:09:01.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.399 "is_configured": false, 00:09:01.399 "data_offset": 2048, 00:09:01.399 "data_size": 63488 00:09:01.399 } 00:09:01.399 ] 00:09:01.399 }' 00:09:01.399 09:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.399 09:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.967 [2024-11-15 09:27:50.180322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:01.967 [2024-11-15 09:27:50.180421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.967 [2024-11-15 09:27:50.180450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:01.967 [2024-11-15 09:27:50.180462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.967 [2024-11-15 09:27:50.181035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.967 [2024-11-15 09:27:50.181166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:01.967 [2024-11-15 09:27:50.181284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:01.967 [2024-11-15 09:27:50.181313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:01.967 pt2 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.967 [2024-11-15 09:27:50.192354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.967 09:27:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.967 "name": "raid_bdev1", 00:09:01.967 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:01.967 "strip_size_kb": 64, 00:09:01.967 "state": "configuring", 00:09:01.967 "raid_level": "raid0", 00:09:01.967 "superblock": true, 00:09:01.967 "num_base_bdevs": 3, 00:09:01.967 "num_base_bdevs_discovered": 1, 00:09:01.967 "num_base_bdevs_operational": 3, 00:09:01.967 "base_bdevs_list": [ 00:09:01.967 { 00:09:01.967 "name": "pt1", 00:09:01.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:01.967 "is_configured": true, 00:09:01.967 "data_offset": 2048, 00:09:01.967 "data_size": 63488 00:09:01.967 }, 00:09:01.967 { 00:09:01.967 "name": null, 00:09:01.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:01.967 "is_configured": false, 00:09:01.967 "data_offset": 0, 00:09:01.967 "data_size": 63488 00:09:01.967 }, 00:09:01.967 { 00:09:01.967 "name": null, 00:09:01.967 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:01.967 
"is_configured": false, 00:09:01.967 "data_offset": 2048, 00:09:01.967 "data_size": 63488 00:09:01.967 } 00:09:01.967 ] 00:09:01.967 }' 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.967 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 [2024-11-15 09:27:50.667559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:02.227 [2024-11-15 09:27:50.667763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.227 [2024-11-15 09:27:50.667805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:02.227 [2024-11-15 09:27:50.667863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.227 [2024-11-15 09:27:50.668453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.227 [2024-11-15 09:27:50.668528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:02.227 [2024-11-15 09:27:50.668656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:02.227 [2024-11-15 09:27:50.668719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:02.227 pt2 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 [2024-11-15 09:27:50.679519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:02.227 [2024-11-15 09:27:50.679591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.227 [2024-11-15 09:27:50.679609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:02.227 [2024-11-15 09:27:50.679622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.227 [2024-11-15 09:27:50.680123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.227 [2024-11-15 09:27:50.680161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:02.227 [2024-11-15 09:27:50.680247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:02.227 [2024-11-15 09:27:50.680275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:02.227 [2024-11-15 09:27:50.680420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.227 [2024-11-15 09:27:50.680433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:02.227 [2024-11-15 09:27:50.680723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:02.227 [2024-11-15 09:27:50.680921] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.227 [2024-11-15 09:27:50.680931] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:02.227 [2024-11-15 09:27:50.681098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.227 pt3 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.227 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.228 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.228 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.228 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.228 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.228 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.526 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:02.526 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.526 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.526 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.526 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.526 "name": "raid_bdev1", 00:09:02.526 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:02.526 "strip_size_kb": 64, 00:09:02.526 "state": "online", 00:09:02.526 "raid_level": "raid0", 00:09:02.526 "superblock": true, 00:09:02.526 "num_base_bdevs": 3, 00:09:02.526 "num_base_bdevs_discovered": 3, 00:09:02.526 "num_base_bdevs_operational": 3, 00:09:02.526 "base_bdevs_list": [ 00:09:02.526 { 00:09:02.526 "name": "pt1", 00:09:02.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.526 "is_configured": true, 00:09:02.526 "data_offset": 2048, 00:09:02.526 "data_size": 63488 00:09:02.526 }, 00:09:02.526 { 00:09:02.526 "name": "pt2", 00:09:02.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.526 "is_configured": true, 00:09:02.526 "data_offset": 2048, 00:09:02.526 "data_size": 63488 00:09:02.526 }, 00:09:02.526 { 00:09:02.526 "name": "pt3", 00:09:02.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.526 "is_configured": true, 00:09:02.526 "data_offset": 2048, 00:09:02.526 "data_size": 63488 00:09:02.526 } 00:09:02.526 ] 00:09:02.526 }' 00:09:02.526 09:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.526 09:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:02.785 09:27:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.785 [2024-11-15 09:27:51.195104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.785 "name": "raid_bdev1", 00:09:02.785 "aliases": [ 00:09:02.785 "9d543cc6-a35c-42fd-9998-55bfc9abbe3d" 00:09:02.785 ], 00:09:02.785 "product_name": "Raid Volume", 00:09:02.785 "block_size": 512, 00:09:02.785 "num_blocks": 190464, 00:09:02.785 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:02.785 "assigned_rate_limits": { 00:09:02.785 "rw_ios_per_sec": 0, 00:09:02.785 "rw_mbytes_per_sec": 0, 00:09:02.785 "r_mbytes_per_sec": 0, 00:09:02.785 "w_mbytes_per_sec": 0 00:09:02.785 }, 00:09:02.785 "claimed": false, 00:09:02.785 "zoned": false, 00:09:02.785 "supported_io_types": { 00:09:02.785 "read": true, 00:09:02.785 "write": true, 00:09:02.785 "unmap": true, 00:09:02.785 "flush": true, 00:09:02.785 "reset": true, 00:09:02.785 "nvme_admin": false, 00:09:02.785 "nvme_io": false, 00:09:02.785 "nvme_io_md": false, 00:09:02.785 
"write_zeroes": true, 00:09:02.785 "zcopy": false, 00:09:02.785 "get_zone_info": false, 00:09:02.785 "zone_management": false, 00:09:02.785 "zone_append": false, 00:09:02.785 "compare": false, 00:09:02.785 "compare_and_write": false, 00:09:02.785 "abort": false, 00:09:02.785 "seek_hole": false, 00:09:02.785 "seek_data": false, 00:09:02.785 "copy": false, 00:09:02.785 "nvme_iov_md": false 00:09:02.785 }, 00:09:02.785 "memory_domains": [ 00:09:02.785 { 00:09:02.785 "dma_device_id": "system", 00:09:02.785 "dma_device_type": 1 00:09:02.785 }, 00:09:02.785 { 00:09:02.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.785 "dma_device_type": 2 00:09:02.785 }, 00:09:02.785 { 00:09:02.785 "dma_device_id": "system", 00:09:02.785 "dma_device_type": 1 00:09:02.785 }, 00:09:02.785 { 00:09:02.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.785 "dma_device_type": 2 00:09:02.785 }, 00:09:02.785 { 00:09:02.785 "dma_device_id": "system", 00:09:02.785 "dma_device_type": 1 00:09:02.785 }, 00:09:02.785 { 00:09:02.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.785 "dma_device_type": 2 00:09:02.785 } 00:09:02.785 ], 00:09:02.785 "driver_specific": { 00:09:02.785 "raid": { 00:09:02.785 "uuid": "9d543cc6-a35c-42fd-9998-55bfc9abbe3d", 00:09:02.785 "strip_size_kb": 64, 00:09:02.785 "state": "online", 00:09:02.785 "raid_level": "raid0", 00:09:02.785 "superblock": true, 00:09:02.785 "num_base_bdevs": 3, 00:09:02.785 "num_base_bdevs_discovered": 3, 00:09:02.785 "num_base_bdevs_operational": 3, 00:09:02.785 "base_bdevs_list": [ 00:09:02.785 { 00:09:02.785 "name": "pt1", 00:09:02.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:02.785 "is_configured": true, 00:09:02.785 "data_offset": 2048, 00:09:02.785 "data_size": 63488 00:09:02.785 }, 00:09:02.785 { 00:09:02.785 "name": "pt2", 00:09:02.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:02.785 "is_configured": true, 00:09:02.785 "data_offset": 2048, 00:09:02.785 "data_size": 63488 00:09:02.785 }, 00:09:02.785 
{ 00:09:02.785 "name": "pt3", 00:09:02.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:02.785 "is_configured": true, 00:09:02.785 "data_offset": 2048, 00:09:02.785 "data_size": 63488 00:09:02.785 } 00:09:02.785 ] 00:09:02.785 } 00:09:02.785 } 00:09:02.785 }' 00:09:02.785 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:03.045 pt2 00:09:03.045 pt3' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:03.045 [2024-11-15 
09:27:51.450593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9d543cc6-a35c-42fd-9998-55bfc9abbe3d '!=' 9d543cc6-a35c-42fd-9998-55bfc9abbe3d ']' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65370 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65370 ']' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65370 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:03.045 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65370 00:09:03.304 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:03.304 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:03.304 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65370' 00:09:03.304 killing process with pid 65370 00:09:03.304 09:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65370 00:09:03.304 [2024-11-15 09:27:51.524712] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.304 09:27:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 65370 00:09:03.304 [2024-11-15 09:27:51.524982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:03.304 [2024-11-15 09:27:51.525122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:03.304 [2024-11-15 09:27:51.525183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:03.562 [2024-11-15 09:27:51.880733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.953 09:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:04.953 00:09:04.953 real 0m5.766s 00:09:04.953 user 0m8.140s 00:09:04.953 sys 0m1.035s 00:09:04.953 09:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:04.953 09:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.953 ************************************ 00:09:04.953 END TEST raid_superblock_test 00:09:04.953 ************************************ 00:09:04.953 09:27:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:04.953 09:27:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:04.953 09:27:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.953 09:27:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.953 ************************************ 00:09:04.953 START TEST raid_read_error_test 00:09:04.953 ************************************ 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:04.953 09:27:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1zrLlg4fpn 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65629 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65629 00:09:04.953 09:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65629 ']' 00:09:04.954 09:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.954 09:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:04.954 09:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.954 09:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:04.954 09:27:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.954 [2024-11-15 09:27:53.381085] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:04.954 [2024-11-15 09:27:53.381260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65629 ] 00:09:05.212 [2024-11-15 09:27:53.569023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.471 [2024-11-15 09:27:53.708239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.729 [2024-11-15 09:27:53.940215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.729 [2024-11-15 09:27:53.940296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.987 BaseBdev1_malloc 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.987 true 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.987 [2024-11-15 09:27:54.363814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:05.987 [2024-11-15 09:27:54.363915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.987 [2024-11-15 09:27:54.363940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:05.987 [2024-11-15 09:27:54.363953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.987 [2024-11-15 09:27:54.366601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.987 [2024-11-15 09:27:54.366657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:05.987 BaseBdev1 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.987 BaseBdev2_malloc 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.987 true 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.987 [2024-11-15 09:27:54.438699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:05.987 [2024-11-15 09:27:54.438881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.987 [2024-11-15 09:27:54.438907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:05.987 [2024-11-15 09:27:54.438920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.987 [2024-11-15 09:27:54.441463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.987 [2024-11-15 09:27:54.441512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:05.987 BaseBdev2 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.987 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.245 BaseBdev3_malloc 00:09:06.245 09:27:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.245 true 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.245 [2024-11-15 09:27:54.523865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:06.245 [2024-11-15 09:27:54.523953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.245 [2024-11-15 09:27:54.523979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:06.245 [2024-11-15 09:27:54.523992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.245 [2024-11-15 09:27:54.526632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.245 [2024-11-15 09:27:54.526688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:06.245 BaseBdev3 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.245 [2024-11-15 09:27:54.535947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.245 [2024-11-15 09:27:54.538179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.245 [2024-11-15 09:27:54.538283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.245 [2024-11-15 09:27:54.538532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:06.245 [2024-11-15 09:27:54.538549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.245 [2024-11-15 09:27:54.538891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:06.245 [2024-11-15 09:27:54.539089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:06.245 [2024-11-15 09:27:54.539106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:06.245 [2024-11-15 09:27:54.539327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.245 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.246 09:27:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.246 "name": "raid_bdev1", 00:09:06.246 "uuid": "c742bcbd-e550-447d-844d-8784d36c1d97", 00:09:06.246 "strip_size_kb": 64, 00:09:06.246 "state": "online", 00:09:06.246 "raid_level": "raid0", 00:09:06.246 "superblock": true, 00:09:06.246 "num_base_bdevs": 3, 00:09:06.246 "num_base_bdevs_discovered": 3, 00:09:06.246 "num_base_bdevs_operational": 3, 00:09:06.246 "base_bdevs_list": [ 00:09:06.246 { 00:09:06.246 "name": "BaseBdev1", 00:09:06.246 "uuid": "b7398ba4-d646-540f-89e6-21193b49558a", 00:09:06.246 "is_configured": true, 00:09:06.246 "data_offset": 2048, 00:09:06.246 "data_size": 63488 00:09:06.246 }, 00:09:06.246 { 00:09:06.246 "name": "BaseBdev2", 00:09:06.246 "uuid": "ca500310-e9e1-5214-9a0d-3c02d62de0d1", 00:09:06.246 "is_configured": true, 00:09:06.246 "data_offset": 2048, 00:09:06.246 "data_size": 63488 
00:09:06.246 }, 00:09:06.246 { 00:09:06.246 "name": "BaseBdev3", 00:09:06.246 "uuid": "e281e84f-d995-5a83-a884-0bc8cc6f53d7", 00:09:06.246 "is_configured": true, 00:09:06.246 "data_offset": 2048, 00:09:06.246 "data_size": 63488 00:09:06.246 } 00:09:06.246 ] 00:09:06.246 }' 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.246 09:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.810 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:06.810 09:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:06.810 [2024-11-15 09:27:55.112569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:07.741 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:07.741 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.741 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.741 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.741 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:07.741 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.742 "name": "raid_bdev1", 00:09:07.742 "uuid": "c742bcbd-e550-447d-844d-8784d36c1d97", 00:09:07.742 "strip_size_kb": 64, 00:09:07.742 "state": "online", 00:09:07.742 "raid_level": "raid0", 00:09:07.742 "superblock": true, 00:09:07.742 "num_base_bdevs": 3, 00:09:07.742 "num_base_bdevs_discovered": 3, 00:09:07.742 "num_base_bdevs_operational": 3, 00:09:07.742 "base_bdevs_list": [ 00:09:07.742 { 00:09:07.742 "name": "BaseBdev1", 00:09:07.742 "uuid": "b7398ba4-d646-540f-89e6-21193b49558a", 00:09:07.742 "is_configured": true, 00:09:07.742 "data_offset": 2048, 00:09:07.742 "data_size": 63488 
00:09:07.742 }, 00:09:07.742 { 00:09:07.742 "name": "BaseBdev2", 00:09:07.742 "uuid": "ca500310-e9e1-5214-9a0d-3c02d62de0d1", 00:09:07.742 "is_configured": true, 00:09:07.742 "data_offset": 2048, 00:09:07.742 "data_size": 63488 00:09:07.742 }, 00:09:07.742 { 00:09:07.742 "name": "BaseBdev3", 00:09:07.742 "uuid": "e281e84f-d995-5a83-a884-0bc8cc6f53d7", 00:09:07.742 "is_configured": true, 00:09:07.742 "data_offset": 2048, 00:09:07.742 "data_size": 63488 00:09:07.742 } 00:09:07.742 ] 00:09:07.742 }' 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.742 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.307 [2024-11-15 09:27:56.522002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.307 [2024-11-15 09:27:56.522146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.307 [2024-11-15 09:27:56.525390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.307 [2024-11-15 09:27:56.525443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.307 [2024-11-15 09:27:56.525485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.307 [2024-11-15 09:27:56.525496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:08.307 { 00:09:08.307 "results": [ 00:09:08.307 { 00:09:08.307 "job": "raid_bdev1", 00:09:08.307 "core_mask": "0x1", 00:09:08.307 "workload": "randrw", 00:09:08.307 "percentage": 50, 
00:09:08.307 "status": "finished", 00:09:08.307 "queue_depth": 1, 00:09:08.307 "io_size": 131072, 00:09:08.307 "runtime": 1.410078, 00:09:08.307 "iops": 13363.090552437525, 00:09:08.307 "mibps": 1670.3863190546906, 00:09:08.307 "io_failed": 1, 00:09:08.307 "io_timeout": 0, 00:09:08.307 "avg_latency_us": 103.9147408416055, 00:09:08.307 "min_latency_us": 31.524890829694325, 00:09:08.307 "max_latency_us": 1738.564192139738 00:09:08.307 } 00:09:08.307 ], 00:09:08.307 "core_count": 1 00:09:08.307 } 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65629 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65629 ']' 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65629 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65629 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:08.307 killing process with pid 65629 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65629' 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65629 00:09:08.307 [2024-11-15 09:27:56.571684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.307 09:27:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65629 00:09:08.565 [2024-11-15 
09:27:56.848429] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1zrLlg4fpn 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:09.941 00:09:09.941 real 0m4.864s 00:09:09.941 user 0m5.824s 00:09:09.941 sys 0m0.653s 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.941 ************************************ 00:09:09.941 END TEST raid_read_error_test 00:09:09.941 ************************************ 00:09:09.941 09:27:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.941 09:27:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:09.941 09:27:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:09.941 09:27:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.941 09:27:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.941 ************************************ 00:09:09.941 START TEST raid_write_error_test 00:09:09.941 ************************************ 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:09:09.941 09:27:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.941 09:27:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZDF6n96fxu 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65775 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65775 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65775 ']' 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:09.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:09.941 09:27:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.941 [2024-11-15 09:27:58.311392] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:09.941 [2024-11-15 09:27:58.311561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65775 ] 00:09:10.200 [2024-11-15 09:27:58.494310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.200 [2024-11-15 09:27:58.614817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.458 [2024-11-15 09:27:58.828799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.458 [2024-11-15 09:27:58.828999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 BaseBdev1_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 true 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 [2024-11-15 09:27:59.281104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:11.038 [2024-11-15 09:27:59.281224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.038 [2024-11-15 09:27:59.281254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:11.038 [2024-11-15 09:27:59.281269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.038 [2024-11-15 09:27:59.283657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.038 [2024-11-15 09:27:59.283698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:11.038 BaseBdev1 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.038 BaseBdev2_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 true 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 [2024-11-15 09:27:59.348523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:11.038 [2024-11-15 09:27:59.348643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.038 [2024-11-15 09:27:59.348668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:11.038 [2024-11-15 09:27:59.348682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.038 [2024-11-15 09:27:59.350957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.038 [2024-11-15 09:27:59.350994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:11.038 BaseBdev2 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:11.038 09:27:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 BaseBdev3_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 true 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 [2024-11-15 09:27:59.431493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:11.038 [2024-11-15 09:27:59.431548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.038 [2024-11-15 09:27:59.431569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:11.038 [2024-11-15 09:27:59.431580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.038 [2024-11-15 09:27:59.433907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.038 [2024-11-15 09:27:59.433942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:11.038 BaseBdev3 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.038 [2024-11-15 09:27:59.439566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.038 [2024-11-15 09:27:59.441557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.038 [2024-11-15 09:27:59.441700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.038 [2024-11-15 09:27:59.441939] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:11.038 [2024-11-15 09:27:59.441955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.038 [2024-11-15 09:27:59.442234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:11.038 [2024-11-15 09:27:59.442397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:11.038 [2024-11-15 09:27:59.442420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:11.038 [2024-11-15 09:27:59.442584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.038 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.039 "name": "raid_bdev1", 00:09:11.039 "uuid": "207a2309-837e-4649-a2f7-64a570c96b85", 00:09:11.039 "strip_size_kb": 64, 00:09:11.039 "state": "online", 00:09:11.039 "raid_level": "raid0", 00:09:11.039 "superblock": true, 00:09:11.039 "num_base_bdevs": 3, 00:09:11.039 "num_base_bdevs_discovered": 3, 00:09:11.039 "num_base_bdevs_operational": 3, 00:09:11.039 "base_bdevs_list": [ 00:09:11.039 { 00:09:11.039 "name": "BaseBdev1", 
00:09:11.039 "uuid": "e5c1f41a-9d08-5159-927e-8b042876bcec", 00:09:11.039 "is_configured": true, 00:09:11.039 "data_offset": 2048, 00:09:11.039 "data_size": 63488 00:09:11.039 }, 00:09:11.039 { 00:09:11.039 "name": "BaseBdev2", 00:09:11.039 "uuid": "ae108052-1ea5-515e-9efc-2e185b31559a", 00:09:11.039 "is_configured": true, 00:09:11.039 "data_offset": 2048, 00:09:11.039 "data_size": 63488 00:09:11.039 }, 00:09:11.039 { 00:09:11.039 "name": "BaseBdev3", 00:09:11.039 "uuid": "6f394c03-ab64-5665-a09c-cd241abe8c4c", 00:09:11.039 "is_configured": true, 00:09:11.039 "data_offset": 2048, 00:09:11.039 "data_size": 63488 00:09:11.039 } 00:09:11.039 ] 00:09:11.039 }' 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.039 09:27:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.607 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:11.607 09:27:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:11.607 [2024-11-15 09:28:00.016049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.575 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.576 "name": "raid_bdev1", 00:09:12.576 "uuid": "207a2309-837e-4649-a2f7-64a570c96b85", 00:09:12.576 "strip_size_kb": 64, 00:09:12.576 "state": "online", 00:09:12.576 
"raid_level": "raid0", 00:09:12.576 "superblock": true, 00:09:12.576 "num_base_bdevs": 3, 00:09:12.576 "num_base_bdevs_discovered": 3, 00:09:12.576 "num_base_bdevs_operational": 3, 00:09:12.576 "base_bdevs_list": [ 00:09:12.576 { 00:09:12.576 "name": "BaseBdev1", 00:09:12.576 "uuid": "e5c1f41a-9d08-5159-927e-8b042876bcec", 00:09:12.576 "is_configured": true, 00:09:12.576 "data_offset": 2048, 00:09:12.576 "data_size": 63488 00:09:12.576 }, 00:09:12.576 { 00:09:12.576 "name": "BaseBdev2", 00:09:12.576 "uuid": "ae108052-1ea5-515e-9efc-2e185b31559a", 00:09:12.576 "is_configured": true, 00:09:12.576 "data_offset": 2048, 00:09:12.576 "data_size": 63488 00:09:12.576 }, 00:09:12.576 { 00:09:12.576 "name": "BaseBdev3", 00:09:12.576 "uuid": "6f394c03-ab64-5665-a09c-cd241abe8c4c", 00:09:12.576 "is_configured": true, 00:09:12.576 "data_offset": 2048, 00:09:12.576 "data_size": 63488 00:09:12.576 } 00:09:12.576 ] 00:09:12.576 }' 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.576 09:28:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.143 [2024-11-15 09:28:01.384810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.143 [2024-11-15 09:28:01.384987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.143 [2024-11-15 09:28:01.387736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.143 [2024-11-15 09:28:01.387855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.143 [2024-11-15 09:28:01.387916] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.143 [2024-11-15 09:28:01.387967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:13.143 { 00:09:13.143 "results": [ 00:09:13.143 { 00:09:13.143 "job": "raid_bdev1", 00:09:13.143 "core_mask": "0x1", 00:09:13.143 "workload": "randrw", 00:09:13.143 "percentage": 50, 00:09:13.143 "status": "finished", 00:09:13.143 "queue_depth": 1, 00:09:13.143 "io_size": 131072, 00:09:13.143 "runtime": 1.369569, 00:09:13.143 "iops": 14695.864173327522, 00:09:13.143 "mibps": 1836.9830216659402, 00:09:13.143 "io_failed": 1, 00:09:13.143 "io_timeout": 0, 00:09:13.143 "avg_latency_us": 94.64191445491214, 00:09:13.143 "min_latency_us": 26.717903930131005, 00:09:13.143 "max_latency_us": 1552.5449781659388 00:09:13.143 } 00:09:13.143 ], 00:09:13.143 "core_count": 1 00:09:13.143 } 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65775 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65775 ']' 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65775 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65775 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 65775' 00:09:13.143 killing process with pid 65775 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65775 00:09:13.143 [2024-11-15 09:28:01.432771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.143 09:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65775 00:09:13.402 [2024-11-15 09:28:01.693586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZDF6n96fxu 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:14.780 00:09:14.780 real 0m4.799s 00:09:14.780 user 0m5.690s 00:09:14.780 sys 0m0.620s 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.780 ************************************ 00:09:14.780 END TEST raid_write_error_test 00:09:14.780 ************************************ 00:09:14.780 09:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.780 09:28:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:14.780 09:28:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:14.780 09:28:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:14.780 09:28:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.780 09:28:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:14.780 ************************************ 00:09:14.780 START TEST raid_state_function_test 00:09:14.780 ************************************ 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:14.780 09:28:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65919 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:14.780 Process raid pid: 65919 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65919' 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65919 00:09:14.780 09:28:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65919 ']' 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:14.780 09:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.780 [2024-11-15 09:28:03.156652] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:14.780 [2024-11-15 09:28:03.156885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.040 [2024-11-15 09:28:03.338034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.040 [2024-11-15 09:28:03.467739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.298 [2024-11-15 09:28:03.698987] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.298 [2024-11-15 09:28:03.699034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.866 [2024-11-15 09:28:04.086297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.866 [2024-11-15 09:28:04.086368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.866 [2024-11-15 09:28:04.086380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.866 [2024-11-15 09:28:04.086392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.866 [2024-11-15 09:28:04.086399] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.866 [2024-11-15 09:28:04.086410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.866 "name": "Existed_Raid", 00:09:15.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.866 "strip_size_kb": 64, 00:09:15.866 "state": "configuring", 00:09:15.866 "raid_level": "concat", 00:09:15.866 "superblock": false, 00:09:15.866 "num_base_bdevs": 3, 00:09:15.866 "num_base_bdevs_discovered": 0, 00:09:15.866 "num_base_bdevs_operational": 3, 00:09:15.866 "base_bdevs_list": [ 00:09:15.866 { 00:09:15.866 "name": "BaseBdev1", 00:09:15.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.866 "is_configured": false, 00:09:15.866 "data_offset": 0, 00:09:15.866 "data_size": 0 00:09:15.866 }, 00:09:15.866 { 00:09:15.866 "name": "BaseBdev2", 00:09:15.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.866 "is_configured": false, 00:09:15.866 "data_offset": 0, 00:09:15.866 "data_size": 0 00:09:15.866 }, 00:09:15.866 { 00:09:15.866 "name": "BaseBdev3", 00:09:15.866 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:15.866 "is_configured": false, 00:09:15.866 "data_offset": 0, 00:09:15.866 "data_size": 0 00:09:15.866 } 00:09:15.866 ] 00:09:15.866 }' 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.866 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.125 [2024-11-15 09:28:04.557435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.125 [2024-11-15 09:28:04.557576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.125 [2024-11-15 09:28:04.569425] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.125 [2024-11-15 09:28:04.569584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.125 [2024-11-15 09:28:04.569617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.125 [2024-11-15 09:28:04.569645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:16.125 [2024-11-15 09:28:04.569666] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.125 [2024-11-15 09:28:04.569691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.125 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.384 [2024-11-15 09:28:04.622682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.384 BaseBdev1 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.384 [ 00:09:16.384 { 00:09:16.384 "name": "BaseBdev1", 00:09:16.384 "aliases": [ 00:09:16.384 "4291d0f2-a875-4a88-9346-70463670fd93" 00:09:16.384 ], 00:09:16.384 "product_name": "Malloc disk", 00:09:16.384 "block_size": 512, 00:09:16.384 "num_blocks": 65536, 00:09:16.384 "uuid": "4291d0f2-a875-4a88-9346-70463670fd93", 00:09:16.384 "assigned_rate_limits": { 00:09:16.384 "rw_ios_per_sec": 0, 00:09:16.384 "rw_mbytes_per_sec": 0, 00:09:16.384 "r_mbytes_per_sec": 0, 00:09:16.384 "w_mbytes_per_sec": 0 00:09:16.384 }, 00:09:16.384 "claimed": true, 00:09:16.384 "claim_type": "exclusive_write", 00:09:16.384 "zoned": false, 00:09:16.384 "supported_io_types": { 00:09:16.384 "read": true, 00:09:16.384 "write": true, 00:09:16.384 "unmap": true, 00:09:16.384 "flush": true, 00:09:16.384 "reset": true, 00:09:16.384 "nvme_admin": false, 00:09:16.384 "nvme_io": false, 00:09:16.384 "nvme_io_md": false, 00:09:16.384 "write_zeroes": true, 00:09:16.384 "zcopy": true, 00:09:16.384 "get_zone_info": false, 00:09:16.384 "zone_management": false, 00:09:16.384 "zone_append": false, 00:09:16.384 "compare": false, 00:09:16.384 "compare_and_write": false, 00:09:16.384 "abort": true, 00:09:16.384 "seek_hole": false, 00:09:16.384 "seek_data": false, 00:09:16.384 "copy": true, 00:09:16.384 "nvme_iov_md": false 00:09:16.384 }, 00:09:16.384 "memory_domains": [ 00:09:16.384 { 00:09:16.384 "dma_device_id": "system", 00:09:16.384 "dma_device_type": 1 00:09:16.384 }, 00:09:16.384 { 00:09:16.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:16.384 "dma_device_type": 2 00:09:16.384 } 00:09:16.384 ], 00:09:16.384 "driver_specific": {} 00:09:16.384 } 00:09:16.384 ] 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.384 09:28:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.384 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.384 "name": "Existed_Raid", 00:09:16.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.384 "strip_size_kb": 64, 00:09:16.384 "state": "configuring", 00:09:16.384 "raid_level": "concat", 00:09:16.384 "superblock": false, 00:09:16.384 "num_base_bdevs": 3, 00:09:16.384 "num_base_bdevs_discovered": 1, 00:09:16.384 "num_base_bdevs_operational": 3, 00:09:16.384 "base_bdevs_list": [ 00:09:16.384 { 00:09:16.384 "name": "BaseBdev1", 00:09:16.384 "uuid": "4291d0f2-a875-4a88-9346-70463670fd93", 00:09:16.384 "is_configured": true, 00:09:16.384 "data_offset": 0, 00:09:16.384 "data_size": 65536 00:09:16.384 }, 00:09:16.384 { 00:09:16.384 "name": "BaseBdev2", 00:09:16.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.384 "is_configured": false, 00:09:16.384 "data_offset": 0, 00:09:16.384 "data_size": 0 00:09:16.384 }, 00:09:16.384 { 00:09:16.384 "name": "BaseBdev3", 00:09:16.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.385 "is_configured": false, 00:09:16.385 "data_offset": 0, 00:09:16.385 "data_size": 0 00:09:16.385 } 00:09:16.385 ] 00:09:16.385 }' 00:09:16.385 09:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.385 09:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.953 [2024-11-15 09:28:05.129910] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.953 [2024-11-15 09:28:05.130072] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.953 [2024-11-15 09:28:05.141957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.953 [2024-11-15 09:28:05.144008] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.953 [2024-11-15 09:28:05.144085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.953 [2024-11-15 09:28:05.144098] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.953 [2024-11-15 09:28:05.144108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.953 09:28:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.953 "name": "Existed_Raid", 00:09:16.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.953 "strip_size_kb": 64, 00:09:16.953 "state": "configuring", 00:09:16.953 "raid_level": "concat", 00:09:16.953 "superblock": false, 00:09:16.953 "num_base_bdevs": 3, 00:09:16.953 "num_base_bdevs_discovered": 1, 00:09:16.953 "num_base_bdevs_operational": 3, 00:09:16.953 "base_bdevs_list": [ 00:09:16.953 { 00:09:16.953 "name": "BaseBdev1", 00:09:16.953 "uuid": "4291d0f2-a875-4a88-9346-70463670fd93", 00:09:16.953 "is_configured": true, 00:09:16.953 "data_offset": 
0, 00:09:16.953 "data_size": 65536 00:09:16.953 }, 00:09:16.953 { 00:09:16.953 "name": "BaseBdev2", 00:09:16.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.953 "is_configured": false, 00:09:16.953 "data_offset": 0, 00:09:16.953 "data_size": 0 00:09:16.953 }, 00:09:16.953 { 00:09:16.953 "name": "BaseBdev3", 00:09:16.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.953 "is_configured": false, 00:09:16.953 "data_offset": 0, 00:09:16.953 "data_size": 0 00:09:16.953 } 00:09:16.953 ] 00:09:16.953 }' 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.953 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.213 [2024-11-15 09:28:05.637767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.213 BaseBdev2 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.213 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.213 [ 00:09:17.213 { 00:09:17.213 "name": "BaseBdev2", 00:09:17.213 "aliases": [ 00:09:17.213 "7e1f12e1-5e4d-4852-88a3-913755ca13cc" 00:09:17.213 ], 00:09:17.213 "product_name": "Malloc disk", 00:09:17.213 "block_size": 512, 00:09:17.213 "num_blocks": 65536, 00:09:17.213 "uuid": "7e1f12e1-5e4d-4852-88a3-913755ca13cc", 00:09:17.213 "assigned_rate_limits": { 00:09:17.213 "rw_ios_per_sec": 0, 00:09:17.213 "rw_mbytes_per_sec": 0, 00:09:17.213 "r_mbytes_per_sec": 0, 00:09:17.213 "w_mbytes_per_sec": 0 00:09:17.213 }, 00:09:17.213 "claimed": true, 00:09:17.213 "claim_type": "exclusive_write", 00:09:17.213 "zoned": false, 00:09:17.213 "supported_io_types": { 00:09:17.213 "read": true, 00:09:17.213 "write": true, 00:09:17.213 "unmap": true, 00:09:17.213 "flush": true, 00:09:17.213 "reset": true, 00:09:17.213 "nvme_admin": false, 00:09:17.213 "nvme_io": false, 00:09:17.213 "nvme_io_md": false, 00:09:17.213 "write_zeroes": true, 00:09:17.213 "zcopy": true, 00:09:17.213 "get_zone_info": false, 00:09:17.213 "zone_management": false, 00:09:17.213 "zone_append": false, 00:09:17.213 "compare": false, 00:09:17.213 "compare_and_write": false, 00:09:17.213 "abort": true, 00:09:17.213 "seek_hole": 
false, 00:09:17.213 "seek_data": false, 00:09:17.213 "copy": true, 00:09:17.213 "nvme_iov_md": false 00:09:17.213 }, 00:09:17.213 "memory_domains": [ 00:09:17.213 { 00:09:17.213 "dma_device_id": "system", 00:09:17.213 "dma_device_type": 1 00:09:17.213 }, 00:09:17.213 { 00:09:17.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.213 "dma_device_type": 2 00:09:17.213 } 00:09:17.473 ], 00:09:17.473 "driver_specific": {} 00:09:17.473 } 00:09:17.473 ] 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.473 "name": "Existed_Raid", 00:09:17.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.473 "strip_size_kb": 64, 00:09:17.473 "state": "configuring", 00:09:17.473 "raid_level": "concat", 00:09:17.473 "superblock": false, 00:09:17.473 "num_base_bdevs": 3, 00:09:17.473 "num_base_bdevs_discovered": 2, 00:09:17.473 "num_base_bdevs_operational": 3, 00:09:17.473 "base_bdevs_list": [ 00:09:17.473 { 00:09:17.473 "name": "BaseBdev1", 00:09:17.473 "uuid": "4291d0f2-a875-4a88-9346-70463670fd93", 00:09:17.473 "is_configured": true, 00:09:17.473 "data_offset": 0, 00:09:17.473 "data_size": 65536 00:09:17.473 }, 00:09:17.473 { 00:09:17.473 "name": "BaseBdev2", 00:09:17.473 "uuid": "7e1f12e1-5e4d-4852-88a3-913755ca13cc", 00:09:17.473 "is_configured": true, 00:09:17.473 "data_offset": 0, 00:09:17.473 "data_size": 65536 00:09:17.473 }, 00:09:17.473 { 00:09:17.473 "name": "BaseBdev3", 00:09:17.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.473 "is_configured": false, 00:09:17.473 "data_offset": 0, 00:09:17.473 "data_size": 0 00:09:17.473 } 00:09:17.473 ] 00:09:17.473 }' 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.473 09:28:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.732 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.732 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.732 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.992 [2024-11-15 09:28:06.207518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.992 [2024-11-15 09:28:06.207574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:17.992 [2024-11-15 09:28:06.207588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:17.992 [2024-11-15 09:28:06.207912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:17.992 [2024-11-15 09:28:06.208111] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:17.992 [2024-11-15 09:28:06.208124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:17.992 [2024-11-15 09:28:06.208422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.992 BaseBdev3 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:17.992 09:28:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.992 [ 00:09:17.992 { 00:09:17.992 "name": "BaseBdev3", 00:09:17.992 "aliases": [ 00:09:17.992 "ee88bd35-dd30-43da-b58a-a2f35a4a4c1d" 00:09:17.992 ], 00:09:17.992 "product_name": "Malloc disk", 00:09:17.992 "block_size": 512, 00:09:17.992 "num_blocks": 65536, 00:09:17.992 "uuid": "ee88bd35-dd30-43da-b58a-a2f35a4a4c1d", 00:09:17.992 "assigned_rate_limits": { 00:09:17.992 "rw_ios_per_sec": 0, 00:09:17.992 "rw_mbytes_per_sec": 0, 00:09:17.992 "r_mbytes_per_sec": 0, 00:09:17.992 "w_mbytes_per_sec": 0 00:09:17.992 }, 00:09:17.992 "claimed": true, 00:09:17.992 "claim_type": "exclusive_write", 00:09:17.992 "zoned": false, 00:09:17.992 "supported_io_types": { 00:09:17.992 "read": true, 00:09:17.992 "write": true, 00:09:17.992 "unmap": true, 00:09:17.992 "flush": true, 00:09:17.992 "reset": true, 00:09:17.992 "nvme_admin": false, 00:09:17.992 "nvme_io": false, 00:09:17.992 "nvme_io_md": false, 00:09:17.992 "write_zeroes": true, 00:09:17.992 "zcopy": true, 00:09:17.992 "get_zone_info": false, 00:09:17.992 "zone_management": false, 00:09:17.992 "zone_append": false, 00:09:17.992 "compare": false, 
00:09:17.992 "compare_and_write": false, 00:09:17.992 "abort": true, 00:09:17.992 "seek_hole": false, 00:09:17.992 "seek_data": false, 00:09:17.992 "copy": true, 00:09:17.992 "nvme_iov_md": false 00:09:17.992 }, 00:09:17.992 "memory_domains": [ 00:09:17.992 { 00:09:17.992 "dma_device_id": "system", 00:09:17.992 "dma_device_type": 1 00:09:17.992 }, 00:09:17.992 { 00:09:17.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.992 "dma_device_type": 2 00:09:17.992 } 00:09:17.992 ], 00:09:17.992 "driver_specific": {} 00:09:17.992 } 00:09:17.992 ] 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:17.992 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.993 "name": "Existed_Raid", 00:09:17.993 "uuid": "85ff5b0e-fcc6-4cf2-ab58-140da1b044da", 00:09:17.993 "strip_size_kb": 64, 00:09:17.993 "state": "online", 00:09:17.993 "raid_level": "concat", 00:09:17.993 "superblock": false, 00:09:17.993 "num_base_bdevs": 3, 00:09:17.993 "num_base_bdevs_discovered": 3, 00:09:17.993 "num_base_bdevs_operational": 3, 00:09:17.993 "base_bdevs_list": [ 00:09:17.993 { 00:09:17.993 "name": "BaseBdev1", 00:09:17.993 "uuid": "4291d0f2-a875-4a88-9346-70463670fd93", 00:09:17.993 "is_configured": true, 00:09:17.993 "data_offset": 0, 00:09:17.993 "data_size": 65536 00:09:17.993 }, 00:09:17.993 { 00:09:17.993 "name": "BaseBdev2", 00:09:17.993 "uuid": "7e1f12e1-5e4d-4852-88a3-913755ca13cc", 00:09:17.993 "is_configured": true, 00:09:17.993 "data_offset": 0, 00:09:17.993 "data_size": 65536 00:09:17.993 }, 00:09:17.993 { 00:09:17.993 "name": "BaseBdev3", 00:09:17.993 "uuid": "ee88bd35-dd30-43da-b58a-a2f35a4a4c1d", 00:09:17.993 "is_configured": true, 00:09:17.993 "data_offset": 0, 00:09:17.993 "data_size": 65536 00:09:17.993 } 00:09:17.993 ] 00:09:17.993 }' 00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:17.993 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.562 [2024-11-15 09:28:06.727054] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.562 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.562 "name": "Existed_Raid", 00:09:18.562 "aliases": [ 00:09:18.562 "85ff5b0e-fcc6-4cf2-ab58-140da1b044da" 00:09:18.562 ], 00:09:18.562 "product_name": "Raid Volume", 00:09:18.562 "block_size": 512, 00:09:18.562 "num_blocks": 196608, 00:09:18.562 "uuid": "85ff5b0e-fcc6-4cf2-ab58-140da1b044da", 00:09:18.562 "assigned_rate_limits": { 00:09:18.562 "rw_ios_per_sec": 0, 00:09:18.562 "rw_mbytes_per_sec": 0, 00:09:18.562 "r_mbytes_per_sec": 
0, 00:09:18.562 "w_mbytes_per_sec": 0 00:09:18.562 }, 00:09:18.563 "claimed": false, 00:09:18.563 "zoned": false, 00:09:18.563 "supported_io_types": { 00:09:18.563 "read": true, 00:09:18.563 "write": true, 00:09:18.563 "unmap": true, 00:09:18.563 "flush": true, 00:09:18.563 "reset": true, 00:09:18.563 "nvme_admin": false, 00:09:18.563 "nvme_io": false, 00:09:18.563 "nvme_io_md": false, 00:09:18.563 "write_zeroes": true, 00:09:18.563 "zcopy": false, 00:09:18.563 "get_zone_info": false, 00:09:18.563 "zone_management": false, 00:09:18.563 "zone_append": false, 00:09:18.563 "compare": false, 00:09:18.563 "compare_and_write": false, 00:09:18.563 "abort": false, 00:09:18.563 "seek_hole": false, 00:09:18.563 "seek_data": false, 00:09:18.563 "copy": false, 00:09:18.563 "nvme_iov_md": false 00:09:18.563 }, 00:09:18.563 "memory_domains": [ 00:09:18.563 { 00:09:18.563 "dma_device_id": "system", 00:09:18.563 "dma_device_type": 1 00:09:18.563 }, 00:09:18.563 { 00:09:18.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.563 "dma_device_type": 2 00:09:18.563 }, 00:09:18.563 { 00:09:18.563 "dma_device_id": "system", 00:09:18.563 "dma_device_type": 1 00:09:18.563 }, 00:09:18.563 { 00:09:18.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.563 "dma_device_type": 2 00:09:18.563 }, 00:09:18.563 { 00:09:18.563 "dma_device_id": "system", 00:09:18.563 "dma_device_type": 1 00:09:18.563 }, 00:09:18.563 { 00:09:18.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.563 "dma_device_type": 2 00:09:18.563 } 00:09:18.563 ], 00:09:18.563 "driver_specific": { 00:09:18.563 "raid": { 00:09:18.563 "uuid": "85ff5b0e-fcc6-4cf2-ab58-140da1b044da", 00:09:18.563 "strip_size_kb": 64, 00:09:18.563 "state": "online", 00:09:18.563 "raid_level": "concat", 00:09:18.563 "superblock": false, 00:09:18.563 "num_base_bdevs": 3, 00:09:18.563 "num_base_bdevs_discovered": 3, 00:09:18.563 "num_base_bdevs_operational": 3, 00:09:18.563 "base_bdevs_list": [ 00:09:18.563 { 00:09:18.563 "name": "BaseBdev1", 
00:09:18.563 "uuid": "4291d0f2-a875-4a88-9346-70463670fd93", 00:09:18.563 "is_configured": true, 00:09:18.563 "data_offset": 0, 00:09:18.563 "data_size": 65536 00:09:18.563 }, 00:09:18.563 { 00:09:18.563 "name": "BaseBdev2", 00:09:18.563 "uuid": "7e1f12e1-5e4d-4852-88a3-913755ca13cc", 00:09:18.563 "is_configured": true, 00:09:18.563 "data_offset": 0, 00:09:18.563 "data_size": 65536 00:09:18.563 }, 00:09:18.563 { 00:09:18.563 "name": "BaseBdev3", 00:09:18.563 "uuid": "ee88bd35-dd30-43da-b58a-a2f35a4a4c1d", 00:09:18.563 "is_configured": true, 00:09:18.563 "data_offset": 0, 00:09:18.563 "data_size": 65536 00:09:18.563 } 00:09:18.563 ] 00:09:18.563 } 00:09:18.563 } 00:09:18.563 }' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:18.563 BaseBdev2 00:09:18.563 BaseBdev3' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.563 09:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.563 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
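The checks above compare a four-field tuple (`block_size md_size md_interleave dif_type`) between the raid bdev and each base bdev. A minimal stand-alone sketch of that comparison, with the two strings stubbed as literals instead of coming from `rpc.py bdev_get_bdevs` piped through jq (where null fields join as empty strings, so a plain malloc bdev yields `512` followed by three spaces):

```shell
#!/usr/bin/env bash
# Hypothetical stand-alone sketch of the tuple check at bdev_raid.sh@193.
# In the real test both strings come from jq over bdev_get_bdevs output:
#   jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
cmp_raid_bdev='512   '   # tuple for the raid bdev (stubbed)
cmp_base_bdev='512   '   # same tuple for one base bdev (stubbed)
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
  echo "metadata format matches"
else
  echo "metadata format mismatch" >&2
  exit 1
fi
```

The trailing spaces matter: the test's `[[ 512 == \5\1\2\ \ \  ]]` form escapes each space so the shell compares the literal three-space tail rather than word-splitting it away.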
00:09:18.563 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.563 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:18.563 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.563 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.823 [2024-11-15 09:28:07.030278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:18.823 [2024-11-15 09:28:07.030313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.823 [2024-11-15 09:28:07.030370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.823 "name": "Existed_Raid", 00:09:18.823 "uuid": "85ff5b0e-fcc6-4cf2-ab58-140da1b044da", 00:09:18.823 "strip_size_kb": 64, 00:09:18.823 "state": "offline", 00:09:18.823 "raid_level": "concat", 00:09:18.823 "superblock": false, 00:09:18.823 "num_base_bdevs": 3, 00:09:18.823 "num_base_bdevs_discovered": 2, 00:09:18.823 "num_base_bdevs_operational": 2, 00:09:18.823 "base_bdevs_list": [ 00:09:18.823 { 00:09:18.823 "name": null, 00:09:18.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.823 "is_configured": false, 00:09:18.823 "data_offset": 0, 00:09:18.823 "data_size": 65536 00:09:18.823 }, 00:09:18.823 { 00:09:18.823 "name": "BaseBdev2", 00:09:18.823 "uuid": 
"7e1f12e1-5e4d-4852-88a3-913755ca13cc", 00:09:18.823 "is_configured": true, 00:09:18.823 "data_offset": 0, 00:09:18.823 "data_size": 65536 00:09:18.823 }, 00:09:18.823 { 00:09:18.823 "name": "BaseBdev3", 00:09:18.823 "uuid": "ee88bd35-dd30-43da-b58a-a2f35a4a4c1d", 00:09:18.823 "is_configured": true, 00:09:18.823 "data_offset": 0, 00:09:18.823 "data_size": 65536 00:09:18.823 } 00:09:18.823 ] 00:09:18.823 }' 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.823 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:19.398 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.399 [2024-11-15 09:28:07.648674] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.399 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.399 [2024-11-15 09:28:07.816181] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:19.399 [2024-11-15 09:28:07.816344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:19.670 09:28:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.670 09:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.670 BaseBdev2 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.670 
09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.670 [ 00:09:19.670 { 00:09:19.670 "name": "BaseBdev2", 00:09:19.670 "aliases": [ 00:09:19.670 "2ea3cb4d-234a-4e49-a20f-51200d610d04" 00:09:19.670 ], 00:09:19.670 "product_name": "Malloc disk", 00:09:19.670 "block_size": 512, 00:09:19.670 "num_blocks": 65536, 00:09:19.670 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:19.670 "assigned_rate_limits": { 00:09:19.670 "rw_ios_per_sec": 0, 00:09:19.670 "rw_mbytes_per_sec": 0, 00:09:19.670 "r_mbytes_per_sec": 0, 00:09:19.670 "w_mbytes_per_sec": 0 00:09:19.670 }, 00:09:19.670 "claimed": false, 00:09:19.670 "zoned": false, 00:09:19.670 "supported_io_types": { 00:09:19.670 "read": true, 00:09:19.670 "write": true, 00:09:19.670 "unmap": true, 00:09:19.670 "flush": true, 00:09:19.670 "reset": true, 00:09:19.670 "nvme_admin": false, 00:09:19.670 "nvme_io": false, 00:09:19.670 "nvme_io_md": false, 00:09:19.670 "write_zeroes": true, 
00:09:19.670 "zcopy": true, 00:09:19.670 "get_zone_info": false, 00:09:19.670 "zone_management": false, 00:09:19.670 "zone_append": false, 00:09:19.670 "compare": false, 00:09:19.670 "compare_and_write": false, 00:09:19.670 "abort": true, 00:09:19.670 "seek_hole": false, 00:09:19.670 "seek_data": false, 00:09:19.670 "copy": true, 00:09:19.670 "nvme_iov_md": false 00:09:19.670 }, 00:09:19.670 "memory_domains": [ 00:09:19.670 { 00:09:19.670 "dma_device_id": "system", 00:09:19.670 "dma_device_type": 1 00:09:19.670 }, 00:09:19.670 { 00:09:19.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.670 "dma_device_type": 2 00:09:19.670 } 00:09:19.670 ], 00:09:19.670 "driver_specific": {} 00:09:19.670 } 00:09:19.670 ] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.670 BaseBdev3 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:19.670 09:28:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.670 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.670 [ 00:09:19.670 { 00:09:19.670 "name": "BaseBdev3", 00:09:19.670 "aliases": [ 00:09:19.670 "4718eb79-5048-4caf-af29-7357e0090b47" 00:09:19.670 ], 00:09:19.670 "product_name": "Malloc disk", 00:09:19.670 "block_size": 512, 00:09:19.670 "num_blocks": 65536, 00:09:19.670 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:19.670 "assigned_rate_limits": { 00:09:19.670 "rw_ios_per_sec": 0, 00:09:19.670 "rw_mbytes_per_sec": 0, 00:09:19.670 "r_mbytes_per_sec": 0, 00:09:19.670 "w_mbytes_per_sec": 0 00:09:19.670 }, 00:09:19.670 "claimed": false, 00:09:19.670 "zoned": false, 00:09:19.670 "supported_io_types": { 00:09:19.670 "read": true, 00:09:19.670 "write": true, 00:09:19.670 "unmap": true, 00:09:19.670 "flush": true, 00:09:19.670 "reset": true, 00:09:19.670 "nvme_admin": false, 00:09:19.670 "nvme_io": false, 00:09:19.670 "nvme_io_md": false, 00:09:19.670 "write_zeroes": true, 
00:09:19.670 "zcopy": true, 00:09:19.670 "get_zone_info": false, 00:09:19.670 "zone_management": false, 00:09:19.670 "zone_append": false, 00:09:19.670 "compare": false, 00:09:19.670 "compare_and_write": false, 00:09:19.670 "abort": true, 00:09:19.670 "seek_hole": false, 00:09:19.670 "seek_data": false, 00:09:19.670 "copy": true, 00:09:19.670 "nvme_iov_md": false 00:09:19.930 }, 00:09:19.930 "memory_domains": [ 00:09:19.930 { 00:09:19.930 "dma_device_id": "system", 00:09:19.930 "dma_device_type": 1 00:09:19.930 }, 00:09:19.930 { 00:09:19.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.930 "dma_device_type": 2 00:09:19.930 } 00:09:19.930 ], 00:09:19.930 "driver_specific": {} 00:09:19.930 } 00:09:19.930 ] 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.930 [2024-11-15 09:28:08.142928] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.930 [2024-11-15 09:28:08.143042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.930 [2024-11-15 09:28:08.143086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.930 [2024-11-15 09:28:08.145027] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.930 "name": "Existed_Raid", 00:09:19.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.930 "strip_size_kb": 64, 00:09:19.930 "state": "configuring", 00:09:19.930 "raid_level": "concat", 00:09:19.930 "superblock": false, 00:09:19.930 "num_base_bdevs": 3, 00:09:19.930 "num_base_bdevs_discovered": 2, 00:09:19.930 "num_base_bdevs_operational": 3, 00:09:19.930 "base_bdevs_list": [ 00:09:19.930 { 00:09:19.930 "name": "BaseBdev1", 00:09:19.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.930 "is_configured": false, 00:09:19.930 "data_offset": 0, 00:09:19.930 "data_size": 0 00:09:19.930 }, 00:09:19.930 { 00:09:19.930 "name": "BaseBdev2", 00:09:19.930 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:19.930 "is_configured": true, 00:09:19.930 "data_offset": 0, 00:09:19.930 "data_size": 65536 00:09:19.930 }, 00:09:19.930 { 00:09:19.930 "name": "BaseBdev3", 00:09:19.930 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:19.930 "is_configured": true, 00:09:19.930 "data_offset": 0, 00:09:19.930 "data_size": 65536 00:09:19.930 } 00:09:19.930 ] 00:09:19.930 }' 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.930 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.190 [2024-11-15 09:28:08.638071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.190 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.449 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.449 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.449 "name": "Existed_Raid", 00:09:20.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.449 "strip_size_kb": 64, 00:09:20.449 "state": "configuring", 00:09:20.449 "raid_level": "concat", 00:09:20.449 "superblock": false, 
00:09:20.449 "num_base_bdevs": 3, 00:09:20.449 "num_base_bdevs_discovered": 1, 00:09:20.449 "num_base_bdevs_operational": 3, 00:09:20.449 "base_bdevs_list": [ 00:09:20.449 { 00:09:20.449 "name": "BaseBdev1", 00:09:20.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.449 "is_configured": false, 00:09:20.449 "data_offset": 0, 00:09:20.449 "data_size": 0 00:09:20.449 }, 00:09:20.449 { 00:09:20.449 "name": null, 00:09:20.449 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:20.449 "is_configured": false, 00:09:20.449 "data_offset": 0, 00:09:20.449 "data_size": 65536 00:09:20.449 }, 00:09:20.449 { 00:09:20.449 "name": "BaseBdev3", 00:09:20.449 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:20.449 "is_configured": true, 00:09:20.449 "data_offset": 0, 00:09:20.449 "data_size": 65536 00:09:20.449 } 00:09:20.449 ] 00:09:20.449 }' 00:09:20.449 09:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.449 09:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.708 
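The `waitforbdev` calls traced above poll the target until a newly created bdev is visible. A simplified sketch of that polling pattern (not the exact `autotest_common.sh` body, which delegates the timeout to `bdev_get_bdevs -t <ms>`); `rpc_cmd` is stubbed so the sketch runs without a live SPDK target and "finds" the bdev on the third poll:

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforbdev polling helper; rpc_cmd is a stub.
poll_count=0
rpc_cmd() {
  poll_count=$((poll_count + 1))
  [ "$poll_count" -ge 3 ]   # pretend the bdev appears on the 3rd poll
}
waitforbdev() {
  local bdev_name=$1 bdev_timeout=${2:-5} i
  for ((i = 0; i < bdev_timeout; i++)); do
    if rpc_cmd bdev_get_bdevs -b "$bdev_name"; then
      return 0
    fi
    # the real helper waits between polls rather than busy-looping
  done
  return 1
}
if waitforbdev BaseBdev1; then
  echo "BaseBdev1 ready after $poll_count polls"
fi
```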
09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.708 [2024-11-15 09:28:09.160878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.708 BaseBdev1 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.708 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.967 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.967 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.967 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.967 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.967 [ 00:09:20.967 { 00:09:20.967 "name": "BaseBdev1", 00:09:20.967 "aliases": [ 00:09:20.967 "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd" 00:09:20.967 ], 00:09:20.967 "product_name": 
"Malloc disk", 00:09:20.967 "block_size": 512, 00:09:20.967 "num_blocks": 65536, 00:09:20.967 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:20.967 "assigned_rate_limits": { 00:09:20.967 "rw_ios_per_sec": 0, 00:09:20.967 "rw_mbytes_per_sec": 0, 00:09:20.967 "r_mbytes_per_sec": 0, 00:09:20.967 "w_mbytes_per_sec": 0 00:09:20.967 }, 00:09:20.967 "claimed": true, 00:09:20.967 "claim_type": "exclusive_write", 00:09:20.967 "zoned": false, 00:09:20.967 "supported_io_types": { 00:09:20.967 "read": true, 00:09:20.967 "write": true, 00:09:20.967 "unmap": true, 00:09:20.967 "flush": true, 00:09:20.967 "reset": true, 00:09:20.967 "nvme_admin": false, 00:09:20.967 "nvme_io": false, 00:09:20.967 "nvme_io_md": false, 00:09:20.967 "write_zeroes": true, 00:09:20.967 "zcopy": true, 00:09:20.967 "get_zone_info": false, 00:09:20.967 "zone_management": false, 00:09:20.967 "zone_append": false, 00:09:20.967 "compare": false, 00:09:20.967 "compare_and_write": false, 00:09:20.967 "abort": true, 00:09:20.967 "seek_hole": false, 00:09:20.967 "seek_data": false, 00:09:20.967 "copy": true, 00:09:20.967 "nvme_iov_md": false 00:09:20.967 }, 00:09:20.967 "memory_domains": [ 00:09:20.967 { 00:09:20.967 "dma_device_id": "system", 00:09:20.968 "dma_device_type": 1 00:09:20.968 }, 00:09:20.968 { 00:09:20.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.968 "dma_device_type": 2 00:09:20.968 } 00:09:20.968 ], 00:09:20.968 "driver_specific": {} 00:09:20.968 } 00:09:20.968 ] 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.968 09:28:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.968 "name": "Existed_Raid", 00:09:20.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.968 "strip_size_kb": 64, 00:09:20.968 "state": "configuring", 00:09:20.968 "raid_level": "concat", 00:09:20.968 "superblock": false, 00:09:20.968 "num_base_bdevs": 3, 00:09:20.968 "num_base_bdevs_discovered": 2, 00:09:20.968 "num_base_bdevs_operational": 3, 00:09:20.968 "base_bdevs_list": [ 00:09:20.968 { 00:09:20.968 "name": "BaseBdev1", 
00:09:20.968 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:20.968 "is_configured": true, 00:09:20.968 "data_offset": 0, 00:09:20.968 "data_size": 65536 00:09:20.968 }, 00:09:20.968 { 00:09:20.968 "name": null, 00:09:20.968 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:20.968 "is_configured": false, 00:09:20.968 "data_offset": 0, 00:09:20.968 "data_size": 65536 00:09:20.968 }, 00:09:20.968 { 00:09:20.968 "name": "BaseBdev3", 00:09:20.968 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:20.968 "is_configured": true, 00:09:20.968 "data_offset": 0, 00:09:20.968 "data_size": 65536 00:09:20.968 } 00:09:20.968 ] 00:09:20.968 }' 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.968 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.535 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.536 [2024-11-15 09:28:09.760001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.536 
09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.536 "name": "Existed_Raid", 00:09:21.536 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:21.536 "strip_size_kb": 64, 00:09:21.536 "state": "configuring", 00:09:21.536 "raid_level": "concat", 00:09:21.536 "superblock": false, 00:09:21.536 "num_base_bdevs": 3, 00:09:21.536 "num_base_bdevs_discovered": 1, 00:09:21.536 "num_base_bdevs_operational": 3, 00:09:21.536 "base_bdevs_list": [ 00:09:21.536 { 00:09:21.536 "name": "BaseBdev1", 00:09:21.536 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:21.536 "is_configured": true, 00:09:21.536 "data_offset": 0, 00:09:21.536 "data_size": 65536 00:09:21.536 }, 00:09:21.536 { 00:09:21.536 "name": null, 00:09:21.536 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:21.536 "is_configured": false, 00:09:21.536 "data_offset": 0, 00:09:21.536 "data_size": 65536 00:09:21.536 }, 00:09:21.536 { 00:09:21.536 "name": null, 00:09:21.536 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:21.536 "is_configured": false, 00:09:21.536 "data_offset": 0, 00:09:21.536 "data_size": 65536 00:09:21.536 } 00:09:21.536 ] 00:09:21.536 }' 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.536 09:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.795 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.795 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:21.795 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.795 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.795 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.053 [2024-11-15 09:28:10.291179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.053 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.053 "name": "Existed_Raid", 00:09:22.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.054 "strip_size_kb": 64, 00:09:22.054 "state": "configuring", 00:09:22.054 "raid_level": "concat", 00:09:22.054 "superblock": false, 00:09:22.054 "num_base_bdevs": 3, 00:09:22.054 "num_base_bdevs_discovered": 2, 00:09:22.054 "num_base_bdevs_operational": 3, 00:09:22.054 "base_bdevs_list": [ 00:09:22.054 { 00:09:22.054 "name": "BaseBdev1", 00:09:22.054 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:22.054 "is_configured": true, 00:09:22.054 "data_offset": 0, 00:09:22.054 "data_size": 65536 00:09:22.054 }, 00:09:22.054 { 00:09:22.054 "name": null, 00:09:22.054 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:22.054 "is_configured": false, 00:09:22.054 "data_offset": 0, 00:09:22.054 "data_size": 65536 00:09:22.054 }, 00:09:22.054 { 00:09:22.054 "name": "BaseBdev3", 00:09:22.054 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:22.054 "is_configured": true, 00:09:22.054 "data_offset": 0, 00:09:22.054 "data_size": 65536 00:09:22.054 } 00:09:22.054 ] 00:09:22.054 }' 00:09:22.054 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.054 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.622 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.622 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.622 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.623 [2024-11-15 09:28:10.834300] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.623 09:28:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.623 "name": "Existed_Raid", 00:09:22.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.623 "strip_size_kb": 64, 00:09:22.623 "state": "configuring", 00:09:22.623 "raid_level": "concat", 00:09:22.623 "superblock": false, 00:09:22.623 "num_base_bdevs": 3, 00:09:22.623 "num_base_bdevs_discovered": 1, 00:09:22.623 "num_base_bdevs_operational": 3, 00:09:22.623 "base_bdevs_list": [ 00:09:22.623 { 00:09:22.623 "name": null, 00:09:22.623 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:22.623 "is_configured": false, 00:09:22.623 "data_offset": 0, 00:09:22.623 "data_size": 65536 00:09:22.623 }, 00:09:22.623 { 00:09:22.623 "name": null, 00:09:22.623 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:22.623 "is_configured": false, 00:09:22.623 "data_offset": 0, 00:09:22.623 "data_size": 65536 00:09:22.623 }, 00:09:22.623 { 00:09:22.623 "name": "BaseBdev3", 00:09:22.623 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:22.623 "is_configured": true, 00:09:22.623 "data_offset": 0, 00:09:22.623 "data_size": 65536 00:09:22.623 } 00:09:22.623 ] 00:09:22.623 }' 00:09:22.623 09:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.623 09:28:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.199 [2024-11-15 09:28:11.437164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.199 09:28:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.199 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.199 "name": "Existed_Raid", 00:09:23.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.199 "strip_size_kb": 64, 00:09:23.199 "state": "configuring", 00:09:23.199 "raid_level": "concat", 00:09:23.199 "superblock": false, 00:09:23.199 "num_base_bdevs": 3, 00:09:23.199 "num_base_bdevs_discovered": 2, 00:09:23.199 "num_base_bdevs_operational": 3, 00:09:23.199 "base_bdevs_list": [ 00:09:23.199 { 00:09:23.199 "name": null, 00:09:23.199 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:23.199 "is_configured": false, 00:09:23.199 "data_offset": 0, 00:09:23.199 "data_size": 65536 00:09:23.199 }, 00:09:23.199 { 00:09:23.199 "name": "BaseBdev2", 00:09:23.199 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:23.199 "is_configured": true, 00:09:23.199 "data_offset": 
0, 00:09:23.199 "data_size": 65536 00:09:23.199 }, 00:09:23.199 { 00:09:23.199 "name": "BaseBdev3", 00:09:23.199 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:23.199 "is_configured": true, 00:09:23.199 "data_offset": 0, 00:09:23.199 "data_size": 65536 00:09:23.199 } 00:09:23.199 ] 00:09:23.199 }' 00:09:23.200 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.200 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.470 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.470 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.470 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.470 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.470 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.729 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 98c05a11-4d0b-40b0-abe2-e3fbfb688dbd 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.730 09:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 [2024-11-15 09:28:12.036376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:23.730 [2024-11-15 09:28:12.036435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:23.730 [2024-11-15 09:28:12.036445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:23.730 [2024-11-15 09:28:12.036717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:23.730 [2024-11-15 09:28:12.036893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:23.730 [2024-11-15 09:28:12.036905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:23.730 [2024-11-15 09:28:12.037221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.730 NewBaseBdev 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:23.730 
09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 [ 00:09:23.730 { 00:09:23.730 "name": "NewBaseBdev", 00:09:23.730 "aliases": [ 00:09:23.730 "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd" 00:09:23.730 ], 00:09:23.730 "product_name": "Malloc disk", 00:09:23.730 "block_size": 512, 00:09:23.730 "num_blocks": 65536, 00:09:23.730 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:23.730 "assigned_rate_limits": { 00:09:23.730 "rw_ios_per_sec": 0, 00:09:23.730 "rw_mbytes_per_sec": 0, 00:09:23.730 "r_mbytes_per_sec": 0, 00:09:23.730 "w_mbytes_per_sec": 0 00:09:23.730 }, 00:09:23.730 "claimed": true, 00:09:23.730 "claim_type": "exclusive_write", 00:09:23.730 "zoned": false, 00:09:23.730 "supported_io_types": { 00:09:23.730 "read": true, 00:09:23.730 "write": true, 00:09:23.730 "unmap": true, 00:09:23.730 "flush": true, 00:09:23.730 "reset": true, 00:09:23.730 "nvme_admin": false, 00:09:23.730 "nvme_io": false, 00:09:23.730 "nvme_io_md": false, 00:09:23.730 "write_zeroes": true, 00:09:23.730 "zcopy": true, 00:09:23.730 "get_zone_info": false, 00:09:23.730 "zone_management": false, 00:09:23.730 "zone_append": false, 00:09:23.730 "compare": false, 00:09:23.730 "compare_and_write": false, 00:09:23.730 "abort": true, 00:09:23.730 "seek_hole": false, 00:09:23.730 "seek_data": false, 00:09:23.730 "copy": true, 00:09:23.730 "nvme_iov_md": false 00:09:23.730 }, 00:09:23.730 
"memory_domains": [ 00:09:23.730 { 00:09:23.730 "dma_device_id": "system", 00:09:23.730 "dma_device_type": 1 00:09:23.730 }, 00:09:23.730 { 00:09:23.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.730 "dma_device_type": 2 00:09:23.730 } 00:09:23.730 ], 00:09:23.730 "driver_specific": {} 00:09:23.730 } 00:09:23.730 ] 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.730 "name": "Existed_Raid", 00:09:23.730 "uuid": "3a51a7e3-2b69-4ad0-a9c7-ea8bdf28b8ef", 00:09:23.730 "strip_size_kb": 64, 00:09:23.730 "state": "online", 00:09:23.730 "raid_level": "concat", 00:09:23.730 "superblock": false, 00:09:23.730 "num_base_bdevs": 3, 00:09:23.730 "num_base_bdevs_discovered": 3, 00:09:23.730 "num_base_bdevs_operational": 3, 00:09:23.730 "base_bdevs_list": [ 00:09:23.730 { 00:09:23.730 "name": "NewBaseBdev", 00:09:23.730 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:23.730 "is_configured": true, 00:09:23.730 "data_offset": 0, 00:09:23.730 "data_size": 65536 00:09:23.730 }, 00:09:23.730 { 00:09:23.730 "name": "BaseBdev2", 00:09:23.730 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:23.730 "is_configured": true, 00:09:23.730 "data_offset": 0, 00:09:23.730 "data_size": 65536 00:09:23.730 }, 00:09:23.730 { 00:09:23.730 "name": "BaseBdev3", 00:09:23.730 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:23.730 "is_configured": true, 00:09:23.730 "data_offset": 0, 00:09:23.730 "data_size": 65536 00:09:23.730 } 00:09:23.730 ] 00:09:23.730 }' 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.730 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.300 [2024-11-15 09:28:12.512227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.300 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.300 "name": "Existed_Raid", 00:09:24.300 "aliases": [ 00:09:24.300 "3a51a7e3-2b69-4ad0-a9c7-ea8bdf28b8ef" 00:09:24.300 ], 00:09:24.300 "product_name": "Raid Volume", 00:09:24.300 "block_size": 512, 00:09:24.300 "num_blocks": 196608, 00:09:24.300 "uuid": "3a51a7e3-2b69-4ad0-a9c7-ea8bdf28b8ef", 00:09:24.300 "assigned_rate_limits": { 00:09:24.300 "rw_ios_per_sec": 0, 00:09:24.300 "rw_mbytes_per_sec": 0, 00:09:24.300 "r_mbytes_per_sec": 0, 00:09:24.300 "w_mbytes_per_sec": 0 00:09:24.300 }, 00:09:24.300 "claimed": false, 00:09:24.300 "zoned": false, 00:09:24.300 "supported_io_types": { 00:09:24.300 "read": true, 00:09:24.300 "write": true, 00:09:24.300 "unmap": true, 00:09:24.300 "flush": true, 00:09:24.300 "reset": true, 00:09:24.300 "nvme_admin": false, 00:09:24.300 "nvme_io": false, 00:09:24.300 "nvme_io_md": false, 00:09:24.300 "write_zeroes": true, 
00:09:24.300 "zcopy": false, 00:09:24.300 "get_zone_info": false, 00:09:24.300 "zone_management": false, 00:09:24.300 "zone_append": false, 00:09:24.300 "compare": false, 00:09:24.300 "compare_and_write": false, 00:09:24.300 "abort": false, 00:09:24.300 "seek_hole": false, 00:09:24.300 "seek_data": false, 00:09:24.300 "copy": false, 00:09:24.300 "nvme_iov_md": false 00:09:24.300 }, 00:09:24.300 "memory_domains": [ 00:09:24.300 { 00:09:24.300 "dma_device_id": "system", 00:09:24.300 "dma_device_type": 1 00:09:24.300 }, 00:09:24.300 { 00:09:24.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.300 "dma_device_type": 2 00:09:24.300 }, 00:09:24.300 { 00:09:24.300 "dma_device_id": "system", 00:09:24.300 "dma_device_type": 1 00:09:24.300 }, 00:09:24.300 { 00:09:24.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.300 "dma_device_type": 2 00:09:24.300 }, 00:09:24.300 { 00:09:24.300 "dma_device_id": "system", 00:09:24.300 "dma_device_type": 1 00:09:24.300 }, 00:09:24.300 { 00:09:24.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.300 "dma_device_type": 2 00:09:24.300 } 00:09:24.300 ], 00:09:24.300 "driver_specific": { 00:09:24.300 "raid": { 00:09:24.300 "uuid": "3a51a7e3-2b69-4ad0-a9c7-ea8bdf28b8ef", 00:09:24.300 "strip_size_kb": 64, 00:09:24.300 "state": "online", 00:09:24.300 "raid_level": "concat", 00:09:24.300 "superblock": false, 00:09:24.300 "num_base_bdevs": 3, 00:09:24.300 "num_base_bdevs_discovered": 3, 00:09:24.300 "num_base_bdevs_operational": 3, 00:09:24.300 "base_bdevs_list": [ 00:09:24.300 { 00:09:24.300 "name": "NewBaseBdev", 00:09:24.300 "uuid": "98c05a11-4d0b-40b0-abe2-e3fbfb688dbd", 00:09:24.300 "is_configured": true, 00:09:24.300 "data_offset": 0, 00:09:24.300 "data_size": 65536 00:09:24.300 }, 00:09:24.300 { 00:09:24.300 "name": "BaseBdev2", 00:09:24.300 "uuid": "2ea3cb4d-234a-4e49-a20f-51200d610d04", 00:09:24.300 "is_configured": true, 00:09:24.300 "data_offset": 0, 00:09:24.300 "data_size": 65536 00:09:24.301 }, 00:09:24.301 { 
00:09:24.301 "name": "BaseBdev3", 00:09:24.301 "uuid": "4718eb79-5048-4caf-af29-7357e0090b47", 00:09:24.301 "is_configured": true, 00:09:24.301 "data_offset": 0, 00:09:24.301 "data_size": 65536 00:09:24.301 } 00:09:24.301 ] 00:09:24.301 } 00:09:24.301 } 00:09:24.301 }' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:24.301 BaseBdev2 00:09:24.301 BaseBdev3' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.301 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:24.560 [2024-11-15 09:28:12.791369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.560 [2024-11-15 09:28:12.791409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.560 [2024-11-15 09:28:12.791516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.560 [2024-11-15 09:28:12.791577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.560 [2024-11-15 09:28:12.791591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65919 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65919 ']' 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65919 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65919 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:24.560 killing process with pid 65919 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65919' 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65919 00:09:24.560 [2024-11-15 09:28:12.840163] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.560 09:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65919 00:09:24.818 [2024-11-15 09:28:13.167969] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:26.193 ************************************ 00:09:26.193 END TEST raid_state_function_test 00:09:26.193 ************************************ 00:09:26.193 00:09:26.193 real 0m11.331s 00:09:26.193 user 0m17.933s 00:09:26.193 sys 0m2.082s 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.193 09:28:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:26.193 09:28:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:26.193 09:28:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.193 09:28:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.193 ************************************ 00:09:26.193 START TEST raid_state_function_test_sb 00:09:26.193 ************************************ 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:26.193 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66546 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66546' 00:09:26.194 Process raid pid: 66546 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66546 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66546 ']' 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.194 09:28:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.194 [2024-11-15 09:28:14.569846] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:26.194 [2024-11-15 09:28:14.570145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.454 [2024-11-15 09:28:14.745243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.454 [2024-11-15 09:28:14.866359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.712 [2024-11-15 09:28:15.091742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.712 [2024-11-15 09:28:15.091889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.973 [2024-11-15 09:28:15.417466] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.973 [2024-11-15 09:28:15.417635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.973 [2024-11-15 
09:28:15.417652] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:26.973 [2024-11-15 09:28:15.417663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:26.973 [2024-11-15 09:28:15.417670] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:26.973 [2024-11-15 09:28:15.417682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.973 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.295 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.295 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.295 "name": "Existed_Raid", 00:09:27.295 "uuid": "1798618f-3e38-43bd-842f-7dd3fb0bfc2e", 00:09:27.295 "strip_size_kb": 64, 00:09:27.295 "state": "configuring", 00:09:27.295 "raid_level": "concat", 00:09:27.295 "superblock": true, 00:09:27.295 "num_base_bdevs": 3, 00:09:27.295 "num_base_bdevs_discovered": 0, 00:09:27.295 "num_base_bdevs_operational": 3, 00:09:27.295 "base_bdevs_list": [ 00:09:27.295 { 00:09:27.295 "name": "BaseBdev1", 00:09:27.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.295 "is_configured": false, 00:09:27.295 "data_offset": 0, 00:09:27.295 "data_size": 0 00:09:27.295 }, 00:09:27.295 { 00:09:27.295 "name": "BaseBdev2", 00:09:27.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.295 "is_configured": false, 00:09:27.295 "data_offset": 0, 00:09:27.295 "data_size": 0 00:09:27.295 }, 00:09:27.295 { 00:09:27.295 "name": "BaseBdev3", 00:09:27.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.295 "is_configured": false, 00:09:27.295 "data_offset": 0, 00:09:27.295 "data_size": 0 00:09:27.295 } 00:09:27.295 ] 00:09:27.295 }' 00:09:27.295 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.295 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [2024-11-15 09:28:15.888915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.555 [2024-11-15 09:28:15.889188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [2024-11-15 09:28:15.900750] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.555 [2024-11-15 09:28:15.900985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.555 [2024-11-15 09:28:15.901035] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.555 [2024-11-15 09:28:15.901074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.555 [2024-11-15 09:28:15.901093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.555 [2024-11-15 09:28:15.901117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.555 
09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [2024-11-15 09:28:15.957742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.555 BaseBdev1 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.555 [ 00:09:27.555 { 
00:09:27.555 "name": "BaseBdev1", 00:09:27.555 "aliases": [ 00:09:27.555 "034145ef-e756-41e0-ba7b-3c4e294a5131" 00:09:27.555 ], 00:09:27.555 "product_name": "Malloc disk", 00:09:27.555 "block_size": 512, 00:09:27.555 "num_blocks": 65536, 00:09:27.555 "uuid": "034145ef-e756-41e0-ba7b-3c4e294a5131", 00:09:27.555 "assigned_rate_limits": { 00:09:27.555 "rw_ios_per_sec": 0, 00:09:27.555 "rw_mbytes_per_sec": 0, 00:09:27.555 "r_mbytes_per_sec": 0, 00:09:27.555 "w_mbytes_per_sec": 0 00:09:27.555 }, 00:09:27.555 "claimed": true, 00:09:27.555 "claim_type": "exclusive_write", 00:09:27.555 "zoned": false, 00:09:27.555 "supported_io_types": { 00:09:27.555 "read": true, 00:09:27.555 "write": true, 00:09:27.555 "unmap": true, 00:09:27.555 "flush": true, 00:09:27.555 "reset": true, 00:09:27.555 "nvme_admin": false, 00:09:27.555 "nvme_io": false, 00:09:27.555 "nvme_io_md": false, 00:09:27.555 "write_zeroes": true, 00:09:27.555 "zcopy": true, 00:09:27.555 "get_zone_info": false, 00:09:27.555 "zone_management": false, 00:09:27.555 "zone_append": false, 00:09:27.555 "compare": false, 00:09:27.555 "compare_and_write": false, 00:09:27.555 "abort": true, 00:09:27.555 "seek_hole": false, 00:09:27.555 "seek_data": false, 00:09:27.555 "copy": true, 00:09:27.555 "nvme_iov_md": false 00:09:27.555 }, 00:09:27.555 "memory_domains": [ 00:09:27.555 { 00:09:27.555 "dma_device_id": "system", 00:09:27.555 "dma_device_type": 1 00:09:27.555 }, 00:09:27.555 { 00:09:27.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.555 "dma_device_type": 2 00:09:27.555 } 00:09:27.555 ], 00:09:27.555 "driver_specific": {} 00:09:27.555 } 00:09:27.555 ] 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.555 09:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.555 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.555 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.555 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.555 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.814 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.814 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.814 "name": "Existed_Raid", 00:09:27.814 "uuid": "a2afdf36-92e8-4c6b-b33a-68ef13a6d718", 00:09:27.814 "strip_size_kb": 64, 00:09:27.814 "state": "configuring", 00:09:27.814 "raid_level": "concat", 00:09:27.814 "superblock": true, 00:09:27.814 
"num_base_bdevs": 3, 00:09:27.814 "num_base_bdevs_discovered": 1, 00:09:27.814 "num_base_bdevs_operational": 3, 00:09:27.814 "base_bdevs_list": [ 00:09:27.814 { 00:09:27.814 "name": "BaseBdev1", 00:09:27.814 "uuid": "034145ef-e756-41e0-ba7b-3c4e294a5131", 00:09:27.814 "is_configured": true, 00:09:27.814 "data_offset": 2048, 00:09:27.814 "data_size": 63488 00:09:27.814 }, 00:09:27.814 { 00:09:27.814 "name": "BaseBdev2", 00:09:27.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.814 "is_configured": false, 00:09:27.814 "data_offset": 0, 00:09:27.814 "data_size": 0 00:09:27.814 }, 00:09:27.814 { 00:09:27.814 "name": "BaseBdev3", 00:09:27.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.814 "is_configured": false, 00:09:27.814 "data_offset": 0, 00:09:27.814 "data_size": 0 00:09:27.814 } 00:09:27.814 ] 00:09:27.814 }' 00:09:27.814 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.814 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.073 [2024-11-15 09:28:16.441032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.073 [2024-11-15 09:28:16.441194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:28.073 
09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.073 [2024-11-15 09:28:16.453107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.073 [2024-11-15 09:28:16.455328] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.073 [2024-11-15 09:28:16.455409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.073 [2024-11-15 09:28:16.455449] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:28.073 [2024-11-15 09:28:16.455473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.073 "name": "Existed_Raid", 00:09:28.073 "uuid": "525d7cfa-13ff-4e3b-9a62-463d273313e9", 00:09:28.073 "strip_size_kb": 64, 00:09:28.073 "state": "configuring", 00:09:28.073 "raid_level": "concat", 00:09:28.073 "superblock": true, 00:09:28.073 "num_base_bdevs": 3, 00:09:28.073 "num_base_bdevs_discovered": 1, 00:09:28.073 "num_base_bdevs_operational": 3, 00:09:28.073 "base_bdevs_list": [ 00:09:28.073 { 00:09:28.073 "name": "BaseBdev1", 00:09:28.073 "uuid": "034145ef-e756-41e0-ba7b-3c4e294a5131", 00:09:28.073 "is_configured": true, 00:09:28.073 "data_offset": 2048, 00:09:28.073 "data_size": 63488 00:09:28.073 }, 00:09:28.073 { 00:09:28.073 "name": "BaseBdev2", 00:09:28.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.073 "is_configured": false, 00:09:28.073 "data_offset": 0, 00:09:28.073 "data_size": 0 00:09:28.073 }, 00:09:28.073 { 00:09:28.073 "name": "BaseBdev3", 00:09:28.073 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:28.073 "is_configured": false, 00:09:28.073 "data_offset": 0, 00:09:28.073 "data_size": 0 00:09:28.073 } 00:09:28.073 ] 00:09:28.073 }' 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.073 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.641 [2024-11-15 09:28:16.966546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.641 BaseBdev2 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.641 09:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.641 [ 00:09:28.641 { 00:09:28.641 "name": "BaseBdev2", 00:09:28.641 "aliases": [ 00:09:28.641 "5e17e09b-b56d-432b-af8a-cd1241f270fd" 00:09:28.641 ], 00:09:28.641 "product_name": "Malloc disk", 00:09:28.641 "block_size": 512, 00:09:28.641 "num_blocks": 65536, 00:09:28.641 "uuid": "5e17e09b-b56d-432b-af8a-cd1241f270fd", 00:09:28.641 "assigned_rate_limits": { 00:09:28.641 "rw_ios_per_sec": 0, 00:09:28.641 "rw_mbytes_per_sec": 0, 00:09:28.641 "r_mbytes_per_sec": 0, 00:09:28.641 "w_mbytes_per_sec": 0 00:09:28.641 }, 00:09:28.641 "claimed": true, 00:09:28.641 "claim_type": "exclusive_write", 00:09:28.641 "zoned": false, 00:09:28.641 "supported_io_types": { 00:09:28.641 "read": true, 00:09:28.641 "write": true, 00:09:28.641 "unmap": true, 00:09:28.641 "flush": true, 00:09:28.641 "reset": true, 00:09:28.641 "nvme_admin": false, 00:09:28.641 "nvme_io": false, 00:09:28.641 "nvme_io_md": false, 00:09:28.641 "write_zeroes": true, 00:09:28.641 "zcopy": true, 00:09:28.641 "get_zone_info": false, 00:09:28.641 "zone_management": false, 00:09:28.641 "zone_append": false, 00:09:28.641 "compare": false, 00:09:28.641 "compare_and_write": false, 00:09:28.641 "abort": true, 00:09:28.641 "seek_hole": false, 00:09:28.641 "seek_data": false, 00:09:28.641 "copy": true, 00:09:28.641 "nvme_iov_md": false 00:09:28.641 }, 00:09:28.641 "memory_domains": [ 00:09:28.641 { 00:09:28.641 "dma_device_id": "system", 00:09:28.641 "dma_device_type": 1 00:09:28.641 }, 00:09:28.641 { 00:09:28.641 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.641 "dma_device_type": 2 00:09:28.641 } 00:09:28.641 ], 00:09:28.641 "driver_specific": {} 00:09:28.641 } 00:09:28.641 ] 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.641 "name": "Existed_Raid", 00:09:28.641 "uuid": "525d7cfa-13ff-4e3b-9a62-463d273313e9", 00:09:28.641 "strip_size_kb": 64, 00:09:28.641 "state": "configuring", 00:09:28.641 "raid_level": "concat", 00:09:28.641 "superblock": true, 00:09:28.641 "num_base_bdevs": 3, 00:09:28.641 "num_base_bdevs_discovered": 2, 00:09:28.641 "num_base_bdevs_operational": 3, 00:09:28.641 "base_bdevs_list": [ 00:09:28.641 { 00:09:28.641 "name": "BaseBdev1", 00:09:28.641 "uuid": "034145ef-e756-41e0-ba7b-3c4e294a5131", 00:09:28.641 "is_configured": true, 00:09:28.641 "data_offset": 2048, 00:09:28.641 "data_size": 63488 00:09:28.641 }, 00:09:28.641 { 00:09:28.641 "name": "BaseBdev2", 00:09:28.641 "uuid": "5e17e09b-b56d-432b-af8a-cd1241f270fd", 00:09:28.641 "is_configured": true, 00:09:28.641 "data_offset": 2048, 00:09:28.641 "data_size": 63488 00:09:28.641 }, 00:09:28.641 { 00:09:28.641 "name": "BaseBdev3", 00:09:28.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.641 "is_configured": false, 00:09:28.641 "data_offset": 0, 00:09:28.641 "data_size": 0 00:09:28.641 } 00:09:28.641 ] 00:09:28.641 }' 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.641 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:29.209 09:28:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.209 [2024-11-15 09:28:17.548688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.209 [2024-11-15 09:28:17.549199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.209 [2024-11-15 09:28:17.549272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.209 [2024-11-15 09:28:17.549641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:29.209 [2024-11-15 09:28:17.549890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.209 BaseBdev3 00:09:29.209 [2024-11-15 09:28:17.549946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:29.209 [2024-11-15 09:28:17.550161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.209 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.209 [ 00:09:29.209 { 00:09:29.209 "name": "BaseBdev3", 00:09:29.209 "aliases": [ 00:09:29.209 "417315a7-574c-482b-bebc-5bb370e6ff9c" 00:09:29.209 ], 00:09:29.209 "product_name": "Malloc disk", 00:09:29.209 "block_size": 512, 00:09:29.209 "num_blocks": 65536, 00:09:29.209 "uuid": "417315a7-574c-482b-bebc-5bb370e6ff9c", 00:09:29.209 "assigned_rate_limits": { 00:09:29.209 "rw_ios_per_sec": 0, 00:09:29.209 "rw_mbytes_per_sec": 0, 00:09:29.209 "r_mbytes_per_sec": 0, 00:09:29.209 "w_mbytes_per_sec": 0 00:09:29.209 }, 00:09:29.209 "claimed": true, 00:09:29.209 "claim_type": "exclusive_write", 00:09:29.209 "zoned": false, 00:09:29.209 "supported_io_types": { 00:09:29.209 "read": true, 00:09:29.209 "write": true, 00:09:29.209 "unmap": true, 00:09:29.209 "flush": true, 00:09:29.209 "reset": true, 00:09:29.209 "nvme_admin": false, 00:09:29.209 "nvme_io": false, 00:09:29.209 "nvme_io_md": false, 00:09:29.209 "write_zeroes": true, 00:09:29.209 "zcopy": true, 00:09:29.209 "get_zone_info": false, 00:09:29.209 "zone_management": false, 00:09:29.210 "zone_append": false, 00:09:29.210 "compare": false, 00:09:29.210 "compare_and_write": false, 00:09:29.210 "abort": true, 00:09:29.210 "seek_hole": false, 00:09:29.210 "seek_data": false, 
00:09:29.210 "copy": true, 00:09:29.210 "nvme_iov_md": false 00:09:29.210 }, 00:09:29.210 "memory_domains": [ 00:09:29.210 { 00:09:29.210 "dma_device_id": "system", 00:09:29.210 "dma_device_type": 1 00:09:29.210 }, 00:09:29.210 { 00:09:29.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.210 "dma_device_type": 2 00:09:29.210 } 00:09:29.210 ], 00:09:29.210 "driver_specific": {} 00:09:29.210 } 00:09:29.210 ] 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.210 "name": "Existed_Raid", 00:09:29.210 "uuid": "525d7cfa-13ff-4e3b-9a62-463d273313e9", 00:09:29.210 "strip_size_kb": 64, 00:09:29.210 "state": "online", 00:09:29.210 "raid_level": "concat", 00:09:29.210 "superblock": true, 00:09:29.210 "num_base_bdevs": 3, 00:09:29.210 "num_base_bdevs_discovered": 3, 00:09:29.210 "num_base_bdevs_operational": 3, 00:09:29.210 "base_bdevs_list": [ 00:09:29.210 { 00:09:29.210 "name": "BaseBdev1", 00:09:29.210 "uuid": "034145ef-e756-41e0-ba7b-3c4e294a5131", 00:09:29.210 "is_configured": true, 00:09:29.210 "data_offset": 2048, 00:09:29.210 "data_size": 63488 00:09:29.210 }, 00:09:29.210 { 00:09:29.210 "name": "BaseBdev2", 00:09:29.210 "uuid": "5e17e09b-b56d-432b-af8a-cd1241f270fd", 00:09:29.210 "is_configured": true, 00:09:29.210 "data_offset": 2048, 00:09:29.210 "data_size": 63488 00:09:29.210 }, 00:09:29.210 { 00:09:29.210 "name": "BaseBdev3", 00:09:29.210 "uuid": "417315a7-574c-482b-bebc-5bb370e6ff9c", 00:09:29.210 "is_configured": true, 00:09:29.210 "data_offset": 2048, 00:09:29.210 "data_size": 63488 00:09:29.210 } 00:09:29.210 ] 00:09:29.210 }' 00:09:29.210 09:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.210 09:28:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.775 [2024-11-15 09:28:18.068359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.775 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.775 "name": "Existed_Raid", 00:09:29.775 "aliases": [ 00:09:29.775 "525d7cfa-13ff-4e3b-9a62-463d273313e9" 00:09:29.775 ], 00:09:29.775 "product_name": "Raid Volume", 00:09:29.775 "block_size": 512, 00:09:29.775 "num_blocks": 190464, 00:09:29.775 "uuid": "525d7cfa-13ff-4e3b-9a62-463d273313e9", 00:09:29.775 "assigned_rate_limits": { 00:09:29.775 "rw_ios_per_sec": 0, 00:09:29.775 "rw_mbytes_per_sec": 0, 00:09:29.775 
"r_mbytes_per_sec": 0, 00:09:29.775 "w_mbytes_per_sec": 0 00:09:29.775 }, 00:09:29.775 "claimed": false, 00:09:29.775 "zoned": false, 00:09:29.775 "supported_io_types": { 00:09:29.775 "read": true, 00:09:29.775 "write": true, 00:09:29.775 "unmap": true, 00:09:29.775 "flush": true, 00:09:29.775 "reset": true, 00:09:29.775 "nvme_admin": false, 00:09:29.775 "nvme_io": false, 00:09:29.775 "nvme_io_md": false, 00:09:29.775 "write_zeroes": true, 00:09:29.775 "zcopy": false, 00:09:29.775 "get_zone_info": false, 00:09:29.775 "zone_management": false, 00:09:29.775 "zone_append": false, 00:09:29.775 "compare": false, 00:09:29.775 "compare_and_write": false, 00:09:29.775 "abort": false, 00:09:29.775 "seek_hole": false, 00:09:29.775 "seek_data": false, 00:09:29.775 "copy": false, 00:09:29.775 "nvme_iov_md": false 00:09:29.775 }, 00:09:29.775 "memory_domains": [ 00:09:29.775 { 00:09:29.775 "dma_device_id": "system", 00:09:29.775 "dma_device_type": 1 00:09:29.775 }, 00:09:29.775 { 00:09:29.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.775 "dma_device_type": 2 00:09:29.775 }, 00:09:29.775 { 00:09:29.775 "dma_device_id": "system", 00:09:29.775 "dma_device_type": 1 00:09:29.775 }, 00:09:29.775 { 00:09:29.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.776 "dma_device_type": 2 00:09:29.776 }, 00:09:29.776 { 00:09:29.776 "dma_device_id": "system", 00:09:29.776 "dma_device_type": 1 00:09:29.776 }, 00:09:29.776 { 00:09:29.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.776 "dma_device_type": 2 00:09:29.776 } 00:09:29.776 ], 00:09:29.776 "driver_specific": { 00:09:29.776 "raid": { 00:09:29.776 "uuid": "525d7cfa-13ff-4e3b-9a62-463d273313e9", 00:09:29.776 "strip_size_kb": 64, 00:09:29.776 "state": "online", 00:09:29.776 "raid_level": "concat", 00:09:29.776 "superblock": true, 00:09:29.776 "num_base_bdevs": 3, 00:09:29.776 "num_base_bdevs_discovered": 3, 00:09:29.776 "num_base_bdevs_operational": 3, 00:09:29.776 "base_bdevs_list": [ 00:09:29.776 { 00:09:29.776 
"name": "BaseBdev1", 00:09:29.776 "uuid": "034145ef-e756-41e0-ba7b-3c4e294a5131", 00:09:29.776 "is_configured": true, 00:09:29.776 "data_offset": 2048, 00:09:29.776 "data_size": 63488 00:09:29.776 }, 00:09:29.776 { 00:09:29.776 "name": "BaseBdev2", 00:09:29.776 "uuid": "5e17e09b-b56d-432b-af8a-cd1241f270fd", 00:09:29.776 "is_configured": true, 00:09:29.776 "data_offset": 2048, 00:09:29.776 "data_size": 63488 00:09:29.776 }, 00:09:29.776 { 00:09:29.776 "name": "BaseBdev3", 00:09:29.776 "uuid": "417315a7-574c-482b-bebc-5bb370e6ff9c", 00:09:29.776 "is_configured": true, 00:09:29.776 "data_offset": 2048, 00:09:29.776 "data_size": 63488 00:09:29.776 } 00:09:29.776 ] 00:09:29.776 } 00:09:29.776 } 00:09:29.776 }' 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.776 BaseBdev2 00:09:29.776 BaseBdev3' 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.776 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.776 09:28:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.034 [2024-11-15 09:28:18.327638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.034 [2024-11-15 09:28:18.327680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.034 [2024-11-15 09:28:18.327746] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.034 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.035 "name": "Existed_Raid", 00:09:30.035 "uuid": "525d7cfa-13ff-4e3b-9a62-463d273313e9", 00:09:30.035 "strip_size_kb": 64, 00:09:30.035 "state": "offline", 00:09:30.035 "raid_level": "concat", 00:09:30.035 "superblock": true, 00:09:30.035 "num_base_bdevs": 3, 00:09:30.035 "num_base_bdevs_discovered": 2, 00:09:30.035 "num_base_bdevs_operational": 2, 00:09:30.035 "base_bdevs_list": [ 00:09:30.035 { 00:09:30.035 "name": null, 00:09:30.035 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:30.035 "is_configured": false, 00:09:30.035 "data_offset": 0, 00:09:30.035 "data_size": 63488 00:09:30.035 }, 00:09:30.035 { 00:09:30.035 "name": "BaseBdev2", 00:09:30.035 "uuid": "5e17e09b-b56d-432b-af8a-cd1241f270fd", 00:09:30.035 "is_configured": true, 00:09:30.035 "data_offset": 2048, 00:09:30.035 "data_size": 63488 00:09:30.035 }, 00:09:30.035 { 00:09:30.035 "name": "BaseBdev3", 00:09:30.035 "uuid": "417315a7-574c-482b-bebc-5bb370e6ff9c", 00:09:30.035 "is_configured": true, 00:09:30.035 "data_offset": 2048, 00:09:30.035 "data_size": 63488 00:09:30.035 } 00:09:30.035 ] 00:09:30.035 }' 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.035 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.600 09:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.600 [2024-11-15 09:28:18.894211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.600 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.600 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.600 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.600 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.600 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.601 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.601 [2024-11-15 09:28:19.063756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.601 [2024-11-15 09:28:19.063834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.859 BaseBdev2 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.859 
09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.859 [ 00:09:30.859 { 00:09:30.859 "name": "BaseBdev2", 00:09:30.859 "aliases": [ 00:09:30.859 "d475dd0e-a287-49d1-9555-55e719802c83" 00:09:30.859 ], 00:09:30.859 "product_name": "Malloc disk", 00:09:30.859 "block_size": 512, 00:09:30.859 "num_blocks": 65536, 00:09:30.859 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:30.859 "assigned_rate_limits": { 00:09:30.859 "rw_ios_per_sec": 0, 00:09:30.859 "rw_mbytes_per_sec": 0, 00:09:30.859 "r_mbytes_per_sec": 0, 00:09:30.859 "w_mbytes_per_sec": 0 
00:09:30.859 }, 00:09:30.859 "claimed": false, 00:09:30.859 "zoned": false, 00:09:30.859 "supported_io_types": { 00:09:30.859 "read": true, 00:09:30.859 "write": true, 00:09:30.859 "unmap": true, 00:09:30.859 "flush": true, 00:09:30.859 "reset": true, 00:09:30.859 "nvme_admin": false, 00:09:30.859 "nvme_io": false, 00:09:30.859 "nvme_io_md": false, 00:09:30.859 "write_zeroes": true, 00:09:30.859 "zcopy": true, 00:09:30.859 "get_zone_info": false, 00:09:30.859 "zone_management": false, 00:09:30.859 "zone_append": false, 00:09:30.859 "compare": false, 00:09:30.859 "compare_and_write": false, 00:09:30.859 "abort": true, 00:09:30.859 "seek_hole": false, 00:09:30.859 "seek_data": false, 00:09:30.859 "copy": true, 00:09:30.859 "nvme_iov_md": false 00:09:30.859 }, 00:09:30.859 "memory_domains": [ 00:09:30.859 { 00:09:30.859 "dma_device_id": "system", 00:09:30.859 "dma_device_type": 1 00:09:30.859 }, 00:09:30.859 { 00:09:30.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.859 "dma_device_type": 2 00:09:30.859 } 00:09:30.859 ], 00:09:30.859 "driver_specific": {} 00:09:30.859 } 00:09:30.859 ] 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.859 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 BaseBdev3 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.119 [ 00:09:31.119 { 00:09:31.119 "name": "BaseBdev3", 00:09:31.119 "aliases": [ 00:09:31.119 "d0e84c0a-b91b-4d0f-940e-8c10d2991e44" 00:09:31.119 ], 00:09:31.119 "product_name": "Malloc disk", 00:09:31.119 "block_size": 512, 00:09:31.119 "num_blocks": 65536, 00:09:31.119 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:31.119 "assigned_rate_limits": { 00:09:31.119 "rw_ios_per_sec": 0, 00:09:31.119 "rw_mbytes_per_sec": 0, 
00:09:31.119 "r_mbytes_per_sec": 0, 00:09:31.119 "w_mbytes_per_sec": 0 00:09:31.119 }, 00:09:31.119 "claimed": false, 00:09:31.119 "zoned": false, 00:09:31.119 "supported_io_types": { 00:09:31.119 "read": true, 00:09:31.119 "write": true, 00:09:31.119 "unmap": true, 00:09:31.119 "flush": true, 00:09:31.119 "reset": true, 00:09:31.119 "nvme_admin": false, 00:09:31.119 "nvme_io": false, 00:09:31.119 "nvme_io_md": false, 00:09:31.119 "write_zeroes": true, 00:09:31.119 "zcopy": true, 00:09:31.119 "get_zone_info": false, 00:09:31.119 "zone_management": false, 00:09:31.119 "zone_append": false, 00:09:31.119 "compare": false, 00:09:31.119 "compare_and_write": false, 00:09:31.119 "abort": true, 00:09:31.119 "seek_hole": false, 00:09:31.119 "seek_data": false, 00:09:31.119 "copy": true, 00:09:31.119 "nvme_iov_md": false 00:09:31.119 }, 00:09:31.119 "memory_domains": [ 00:09:31.119 { 00:09:31.119 "dma_device_id": "system", 00:09:31.119 "dma_device_type": 1 00:09:31.119 }, 00:09:31.119 { 00:09:31.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.119 "dma_device_type": 2 00:09:31.119 } 00:09:31.119 ], 00:09:31.119 "driver_specific": {} 00:09:31.119 } 00:09:31.119 ] 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.119 [2024-11-15 09:28:19.416633] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.119 [2024-11-15 09:28:19.416762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.119 [2024-11-15 09:28:19.416819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.119 [2024-11-15 09:28:19.419238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.119 09:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.119 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.120 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.120 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.120 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.120 "name": "Existed_Raid", 00:09:31.120 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:31.120 "strip_size_kb": 64, 00:09:31.120 "state": "configuring", 00:09:31.120 "raid_level": "concat", 00:09:31.120 "superblock": true, 00:09:31.120 "num_base_bdevs": 3, 00:09:31.120 "num_base_bdevs_discovered": 2, 00:09:31.120 "num_base_bdevs_operational": 3, 00:09:31.120 "base_bdevs_list": [ 00:09:31.120 { 00:09:31.120 "name": "BaseBdev1", 00:09:31.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.120 "is_configured": false, 00:09:31.120 "data_offset": 0, 00:09:31.120 "data_size": 0 00:09:31.120 }, 00:09:31.120 { 00:09:31.120 "name": "BaseBdev2", 00:09:31.120 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:31.120 "is_configured": true, 00:09:31.120 "data_offset": 2048, 00:09:31.120 "data_size": 63488 00:09:31.120 }, 00:09:31.120 { 00:09:31.120 "name": "BaseBdev3", 00:09:31.120 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:31.120 "is_configured": true, 00:09:31.120 "data_offset": 2048, 00:09:31.120 "data_size": 63488 00:09:31.120 } 00:09:31.120 ] 00:09:31.120 }' 00:09:31.120 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.120 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.716 [2024-11-15 09:28:19.911805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.716 09:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.716 "name": "Existed_Raid", 00:09:31.716 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:31.716 "strip_size_kb": 64, 00:09:31.716 "state": "configuring", 00:09:31.716 "raid_level": "concat", 00:09:31.716 "superblock": true, 00:09:31.716 "num_base_bdevs": 3, 00:09:31.716 "num_base_bdevs_discovered": 1, 00:09:31.716 "num_base_bdevs_operational": 3, 00:09:31.716 "base_bdevs_list": [ 00:09:31.716 { 00:09:31.716 "name": "BaseBdev1", 00:09:31.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.716 "is_configured": false, 00:09:31.716 "data_offset": 0, 00:09:31.716 "data_size": 0 00:09:31.716 }, 00:09:31.716 { 00:09:31.716 "name": null, 00:09:31.716 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:31.716 "is_configured": false, 00:09:31.716 "data_offset": 0, 00:09:31.716 "data_size": 63488 00:09:31.716 }, 00:09:31.716 { 00:09:31.716 "name": "BaseBdev3", 00:09:31.716 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:31.716 "is_configured": true, 00:09:31.716 "data_offset": 2048, 00:09:31.716 "data_size": 63488 00:09:31.716 } 00:09:31.716 ] 00:09:31.716 }' 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.716 09:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.975 09:28:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.975 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.235 [2024-11-15 09:28:20.474796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.235 BaseBdev1 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.235 
09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.235 [ 00:09:32.235 { 00:09:32.235 "name": "BaseBdev1", 00:09:32.235 "aliases": [ 00:09:32.235 "2b043ff0-f114-4c1d-ba57-1bee89e471d4" 00:09:32.235 ], 00:09:32.235 "product_name": "Malloc disk", 00:09:32.235 "block_size": 512, 00:09:32.235 "num_blocks": 65536, 00:09:32.235 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:32.235 "assigned_rate_limits": { 00:09:32.235 "rw_ios_per_sec": 0, 00:09:32.235 "rw_mbytes_per_sec": 0, 00:09:32.235 "r_mbytes_per_sec": 0, 00:09:32.235 "w_mbytes_per_sec": 0 00:09:32.235 }, 00:09:32.235 "claimed": true, 00:09:32.235 "claim_type": "exclusive_write", 00:09:32.235 "zoned": false, 00:09:32.235 "supported_io_types": { 00:09:32.235 "read": true, 00:09:32.235 "write": true, 00:09:32.235 "unmap": true, 00:09:32.235 "flush": true, 00:09:32.235 "reset": true, 00:09:32.235 "nvme_admin": false, 00:09:32.235 "nvme_io": false, 00:09:32.235 "nvme_io_md": false, 00:09:32.235 "write_zeroes": true, 00:09:32.235 "zcopy": true, 00:09:32.235 "get_zone_info": false, 00:09:32.235 "zone_management": false, 00:09:32.235 "zone_append": false, 00:09:32.235 "compare": false, 00:09:32.235 "compare_and_write": false, 00:09:32.235 "abort": true, 00:09:32.235 "seek_hole": false, 00:09:32.235 "seek_data": false, 00:09:32.235 "copy": true, 00:09:32.235 "nvme_iov_md": false 00:09:32.235 }, 00:09:32.235 "memory_domains": [ 00:09:32.235 { 00:09:32.235 "dma_device_id": "system", 00:09:32.235 "dma_device_type": 1 00:09:32.235 }, 00:09:32.235 { 00:09:32.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:32.235 "dma_device_type": 2 00:09:32.235 } 00:09:32.235 ], 00:09:32.235 "driver_specific": {} 00:09:32.235 } 00:09:32.235 ] 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.235 "name": "Existed_Raid", 00:09:32.235 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:32.235 "strip_size_kb": 64, 00:09:32.235 "state": "configuring", 00:09:32.235 "raid_level": "concat", 00:09:32.235 "superblock": true, 00:09:32.235 "num_base_bdevs": 3, 00:09:32.235 "num_base_bdevs_discovered": 2, 00:09:32.235 "num_base_bdevs_operational": 3, 00:09:32.235 "base_bdevs_list": [ 00:09:32.235 { 00:09:32.235 "name": "BaseBdev1", 00:09:32.235 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:32.235 "is_configured": true, 00:09:32.235 "data_offset": 2048, 00:09:32.235 "data_size": 63488 00:09:32.235 }, 00:09:32.235 { 00:09:32.235 "name": null, 00:09:32.235 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:32.235 "is_configured": false, 00:09:32.235 "data_offset": 0, 00:09:32.235 "data_size": 63488 00:09:32.235 }, 00:09:32.235 { 00:09:32.235 "name": "BaseBdev3", 00:09:32.235 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:32.235 "is_configured": true, 00:09:32.235 "data_offset": 2048, 00:09:32.235 "data_size": 63488 00:09:32.235 } 00:09:32.235 ] 00:09:32.235 }' 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.235 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.495 [2024-11-15 09:28:20.950046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.495 09:28:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.754 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.754 09:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.754 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.754 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.754 09:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.754 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.754 "name": "Existed_Raid", 00:09:32.754 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:32.754 "strip_size_kb": 64, 00:09:32.754 "state": "configuring", 00:09:32.754 "raid_level": "concat", 00:09:32.754 "superblock": true, 00:09:32.754 "num_base_bdevs": 3, 00:09:32.754 "num_base_bdevs_discovered": 1, 00:09:32.754 "num_base_bdevs_operational": 3, 00:09:32.754 "base_bdevs_list": [ 00:09:32.754 { 00:09:32.754 "name": "BaseBdev1", 00:09:32.754 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:32.754 "is_configured": true, 00:09:32.754 "data_offset": 2048, 00:09:32.754 "data_size": 63488 00:09:32.754 }, 00:09:32.754 { 00:09:32.754 "name": null, 00:09:32.754 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:32.754 "is_configured": false, 00:09:32.754 "data_offset": 0, 00:09:32.754 "data_size": 63488 00:09:32.754 }, 00:09:32.754 { 00:09:32.754 "name": null, 00:09:32.754 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:32.754 "is_configured": false, 00:09:32.754 "data_offset": 0, 00:09:32.754 "data_size": 63488 00:09:32.754 } 00:09:32.754 ] 00:09:32.754 }' 00:09:32.754 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.754 09:28:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.013 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.013 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:33.013 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.013 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.013 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.272 [2024-11-15 09:28:21.489190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.272 09:28:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.272 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.273 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.273 "name": "Existed_Raid", 00:09:33.273 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:33.273 "strip_size_kb": 64, 00:09:33.273 "state": "configuring", 00:09:33.273 "raid_level": "concat", 00:09:33.273 "superblock": true, 00:09:33.273 "num_base_bdevs": 3, 00:09:33.273 "num_base_bdevs_discovered": 2, 00:09:33.273 "num_base_bdevs_operational": 3, 00:09:33.273 "base_bdevs_list": [ 00:09:33.273 { 00:09:33.273 "name": "BaseBdev1", 00:09:33.273 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:33.273 "is_configured": true, 00:09:33.273 "data_offset": 2048, 00:09:33.273 "data_size": 63488 00:09:33.273 }, 00:09:33.273 { 00:09:33.273 "name": null, 00:09:33.273 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:33.273 "is_configured": 
false, 00:09:33.273 "data_offset": 0, 00:09:33.273 "data_size": 63488 00:09:33.273 }, 00:09:33.273 { 00:09:33.273 "name": "BaseBdev3", 00:09:33.273 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:33.273 "is_configured": true, 00:09:33.273 "data_offset": 2048, 00:09:33.273 "data_size": 63488 00:09:33.273 } 00:09:33.273 ] 00:09:33.273 }' 00:09:33.273 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.273 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.531 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:33.531 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.531 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.531 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.532 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:33.532 09:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.532 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.532 09:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.532 [2024-11-15 09:28:21.928483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.791 09:28:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.791 "name": "Existed_Raid", 00:09:33.791 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:33.791 "strip_size_kb": 64, 00:09:33.791 "state": "configuring", 00:09:33.791 "raid_level": "concat", 00:09:33.791 "superblock": true, 00:09:33.791 "num_base_bdevs": 3, 00:09:33.791 
"num_base_bdevs_discovered": 1, 00:09:33.791 "num_base_bdevs_operational": 3, 00:09:33.791 "base_bdevs_list": [ 00:09:33.791 { 00:09:33.791 "name": null, 00:09:33.791 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:33.791 "is_configured": false, 00:09:33.791 "data_offset": 0, 00:09:33.791 "data_size": 63488 00:09:33.791 }, 00:09:33.791 { 00:09:33.791 "name": null, 00:09:33.791 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:33.791 "is_configured": false, 00:09:33.791 "data_offset": 0, 00:09:33.791 "data_size": 63488 00:09:33.791 }, 00:09:33.791 { 00:09:33.791 "name": "BaseBdev3", 00:09:33.791 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:33.791 "is_configured": true, 00:09:33.791 "data_offset": 2048, 00:09:33.791 "data_size": 63488 00:09:33.791 } 00:09:33.791 ] 00:09:33.791 }' 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.791 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.051 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.051 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.051 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.051 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.309 09:28:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.309 [2024-11-15 09:28:22.563703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.309 
09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.309 "name": "Existed_Raid", 00:09:34.309 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:34.309 "strip_size_kb": 64, 00:09:34.309 "state": "configuring", 00:09:34.309 "raid_level": "concat", 00:09:34.309 "superblock": true, 00:09:34.309 "num_base_bdevs": 3, 00:09:34.309 "num_base_bdevs_discovered": 2, 00:09:34.309 "num_base_bdevs_operational": 3, 00:09:34.309 "base_bdevs_list": [ 00:09:34.309 { 00:09:34.309 "name": null, 00:09:34.309 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:34.309 "is_configured": false, 00:09:34.309 "data_offset": 0, 00:09:34.309 "data_size": 63488 00:09:34.309 }, 00:09:34.309 { 00:09:34.309 "name": "BaseBdev2", 00:09:34.309 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:34.309 "is_configured": true, 00:09:34.309 "data_offset": 2048, 00:09:34.309 "data_size": 63488 00:09:34.309 }, 00:09:34.309 { 00:09:34.309 "name": "BaseBdev3", 00:09:34.309 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:34.309 "is_configured": true, 00:09:34.309 "data_offset": 2048, 00:09:34.309 "data_size": 63488 00:09:34.309 } 00:09:34.309 ] 00:09:34.309 }' 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.309 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.568 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.568 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.568 09:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.568 09:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:34.568 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2b043ff0-f114-4c1d-ba57-1bee89e471d4 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.826 [2024-11-15 09:28:23.134511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:34.826 [2024-11-15 09:28:23.134757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:34.826 [2024-11-15 09:28:23.134775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.826 [2024-11-15 09:28:23.135099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:34.826 [2024-11-15 09:28:23.135291] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:34.826 [2024-11-15 09:28:23.135309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:34.826 NewBaseBdev 00:09:34.826 [2024-11-15 09:28:23.135487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.826 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.827 [ 00:09:34.827 { 00:09:34.827 "name": "NewBaseBdev", 00:09:34.827 "aliases": [ 00:09:34.827 "2b043ff0-f114-4c1d-ba57-1bee89e471d4" 00:09:34.827 ], 00:09:34.827 "product_name": "Malloc disk", 00:09:34.827 "block_size": 512, 
00:09:34.827 "num_blocks": 65536, 00:09:34.827 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:34.827 "assigned_rate_limits": { 00:09:34.827 "rw_ios_per_sec": 0, 00:09:34.827 "rw_mbytes_per_sec": 0, 00:09:34.827 "r_mbytes_per_sec": 0, 00:09:34.827 "w_mbytes_per_sec": 0 00:09:34.827 }, 00:09:34.827 "claimed": true, 00:09:34.827 "claim_type": "exclusive_write", 00:09:34.827 "zoned": false, 00:09:34.827 "supported_io_types": { 00:09:34.827 "read": true, 00:09:34.827 "write": true, 00:09:34.827 "unmap": true, 00:09:34.827 "flush": true, 00:09:34.827 "reset": true, 00:09:34.827 "nvme_admin": false, 00:09:34.827 "nvme_io": false, 00:09:34.827 "nvme_io_md": false, 00:09:34.827 "write_zeroes": true, 00:09:34.827 "zcopy": true, 00:09:34.827 "get_zone_info": false, 00:09:34.827 "zone_management": false, 00:09:34.827 "zone_append": false, 00:09:34.827 "compare": false, 00:09:34.827 "compare_and_write": false, 00:09:34.827 "abort": true, 00:09:34.827 "seek_hole": false, 00:09:34.827 "seek_data": false, 00:09:34.827 "copy": true, 00:09:34.827 "nvme_iov_md": false 00:09:34.827 }, 00:09:34.827 "memory_domains": [ 00:09:34.827 { 00:09:34.827 "dma_device_id": "system", 00:09:34.827 "dma_device_type": 1 00:09:34.827 }, 00:09:34.827 { 00:09:34.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.827 "dma_device_type": 2 00:09:34.827 } 00:09:34.827 ], 00:09:34.827 "driver_specific": {} 00:09:34.827 } 00:09:34.827 ] 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.827 "name": "Existed_Raid", 00:09:34.827 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:34.827 "strip_size_kb": 64, 00:09:34.827 "state": "online", 00:09:34.827 "raid_level": "concat", 00:09:34.827 "superblock": true, 00:09:34.827 "num_base_bdevs": 3, 00:09:34.827 "num_base_bdevs_discovered": 3, 00:09:34.827 "num_base_bdevs_operational": 3, 00:09:34.827 "base_bdevs_list": [ 00:09:34.827 { 00:09:34.827 "name": "NewBaseBdev", 00:09:34.827 "uuid": 
"2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:34.827 "is_configured": true, 00:09:34.827 "data_offset": 2048, 00:09:34.827 "data_size": 63488 00:09:34.827 }, 00:09:34.827 { 00:09:34.827 "name": "BaseBdev2", 00:09:34.827 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:34.827 "is_configured": true, 00:09:34.827 "data_offset": 2048, 00:09:34.827 "data_size": 63488 00:09:34.827 }, 00:09:34.827 { 00:09:34.827 "name": "BaseBdev3", 00:09:34.827 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:34.827 "is_configured": true, 00:09:34.827 "data_offset": 2048, 00:09:34.827 "data_size": 63488 00:09:34.827 } 00:09:34.827 ] 00:09:34.827 }' 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.827 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:35.396 [2024-11-15 09:28:23.614138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.396 "name": "Existed_Raid", 00:09:35.396 "aliases": [ 00:09:35.396 "c349ebf7-2fc2-4ce7-b7d6-115634bdd174" 00:09:35.396 ], 00:09:35.396 "product_name": "Raid Volume", 00:09:35.396 "block_size": 512, 00:09:35.396 "num_blocks": 190464, 00:09:35.396 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:35.396 "assigned_rate_limits": { 00:09:35.396 "rw_ios_per_sec": 0, 00:09:35.396 "rw_mbytes_per_sec": 0, 00:09:35.396 "r_mbytes_per_sec": 0, 00:09:35.396 "w_mbytes_per_sec": 0 00:09:35.396 }, 00:09:35.396 "claimed": false, 00:09:35.396 "zoned": false, 00:09:35.396 "supported_io_types": { 00:09:35.396 "read": true, 00:09:35.396 "write": true, 00:09:35.396 "unmap": true, 00:09:35.396 "flush": true, 00:09:35.396 "reset": true, 00:09:35.396 "nvme_admin": false, 00:09:35.396 "nvme_io": false, 00:09:35.396 "nvme_io_md": false, 00:09:35.396 "write_zeroes": true, 00:09:35.396 "zcopy": false, 00:09:35.396 "get_zone_info": false, 00:09:35.396 "zone_management": false, 00:09:35.396 "zone_append": false, 00:09:35.396 "compare": false, 00:09:35.396 "compare_and_write": false, 00:09:35.396 "abort": false, 00:09:35.396 "seek_hole": false, 00:09:35.396 "seek_data": false, 00:09:35.396 "copy": false, 00:09:35.396 "nvme_iov_md": false 00:09:35.396 }, 00:09:35.396 "memory_domains": [ 00:09:35.396 { 00:09:35.396 "dma_device_id": "system", 00:09:35.396 "dma_device_type": 1 00:09:35.396 }, 00:09:35.396 { 00:09:35.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.396 "dma_device_type": 2 00:09:35.396 }, 00:09:35.396 { 00:09:35.396 "dma_device_id": "system", 00:09:35.396 "dma_device_type": 1 00:09:35.396 }, 00:09:35.396 { 00:09:35.396 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.396 "dma_device_type": 2 00:09:35.396 }, 00:09:35.396 { 00:09:35.396 "dma_device_id": "system", 00:09:35.396 "dma_device_type": 1 00:09:35.396 }, 00:09:35.396 { 00:09:35.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.396 "dma_device_type": 2 00:09:35.396 } 00:09:35.396 ], 00:09:35.396 "driver_specific": { 00:09:35.396 "raid": { 00:09:35.396 "uuid": "c349ebf7-2fc2-4ce7-b7d6-115634bdd174", 00:09:35.396 "strip_size_kb": 64, 00:09:35.396 "state": "online", 00:09:35.396 "raid_level": "concat", 00:09:35.396 "superblock": true, 00:09:35.396 "num_base_bdevs": 3, 00:09:35.396 "num_base_bdevs_discovered": 3, 00:09:35.396 "num_base_bdevs_operational": 3, 00:09:35.396 "base_bdevs_list": [ 00:09:35.396 { 00:09:35.396 "name": "NewBaseBdev", 00:09:35.396 "uuid": "2b043ff0-f114-4c1d-ba57-1bee89e471d4", 00:09:35.396 "is_configured": true, 00:09:35.396 "data_offset": 2048, 00:09:35.396 "data_size": 63488 00:09:35.396 }, 00:09:35.396 { 00:09:35.396 "name": "BaseBdev2", 00:09:35.396 "uuid": "d475dd0e-a287-49d1-9555-55e719802c83", 00:09:35.396 "is_configured": true, 00:09:35.396 "data_offset": 2048, 00:09:35.396 "data_size": 63488 00:09:35.396 }, 00:09:35.396 { 00:09:35.396 "name": "BaseBdev3", 00:09:35.396 "uuid": "d0e84c0a-b91b-4d0f-940e-8c10d2991e44", 00:09:35.396 "is_configured": true, 00:09:35.396 "data_offset": 2048, 00:09:35.396 "data_size": 63488 00:09:35.396 } 00:09:35.396 ] 00:09:35.396 } 00:09:35.396 } 00:09:35.396 }' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:35.396 BaseBdev2 00:09:35.396 BaseBdev3' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.656 [2024-11-15 09:28:23.865367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.656 [2024-11-15 09:28:23.865404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.656 [2024-11-15 09:28:23.865512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.656 [2024-11-15 09:28:23.865581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.656 [2024-11-15 09:28:23.865595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66546 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66546 ']' 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66546 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66546 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66546' 00:09:35.656 killing process with pid 66546 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66546 00:09:35.656 [2024-11-15 09:28:23.918436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.656 09:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66546 00:09:35.914 [2024-11-15 09:28:24.269365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.293 09:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.293 ************************************ 00:09:37.293 END TEST raid_state_function_test_sb 00:09:37.293 ************************************ 00:09:37.293 00:09:37.293 real 0m11.207s 
00:09:37.293 user 0m17.417s 00:09:37.293 sys 0m2.108s 00:09:37.293 09:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.293 09:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.293 09:28:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:37.293 09:28:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:37.293 09:28:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.293 09:28:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.293 ************************************ 00:09:37.293 START TEST raid_superblock_test 00:09:37.293 ************************************ 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:37.293 09:28:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67177 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67177 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 67177 ']' 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:37.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:37.293 09:28:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.552 [2024-11-15 09:28:25.822940] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:37.552 [2024-11-15 09:28:25.823180] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67177 ] 00:09:37.552 [2024-11-15 09:28:25.987624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.811 [2024-11-15 09:28:26.120751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.070 [2024-11-15 09:28:26.349317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.070 [2024-11-15 09:28:26.349383] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:38.337 
09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.337 malloc1 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.337 [2024-11-15 09:28:26.742119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.337 [2024-11-15 09:28:26.742210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.337 [2024-11-15 09:28:26.742240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:38.337 [2024-11-15 09:28:26.742250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.337 [2024-11-15 09:28:26.744551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.337 [2024-11-15 09:28:26.744603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.337 pt1 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.337 malloc2 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.337 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.337 [2024-11-15 09:28:26.796997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.337 [2024-11-15 09:28:26.797157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.337 [2024-11-15 09:28:26.797221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:38.337 [2024-11-15 09:28:26.797259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.608 [2024-11-15 09:28:26.799759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.608 [2024-11-15 09:28:26.799838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.608 
pt2 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.608 malloc3 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.608 [2024-11-15 09:28:26.870220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:38.608 [2024-11-15 09:28:26.870381] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.608 [2024-11-15 09:28:26.870432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:38.608 [2024-11-15 09:28:26.870470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.608 [2024-11-15 09:28:26.873006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.608 [2024-11-15 09:28:26.873097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:38.608 pt3 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.608 [2024-11-15 09:28:26.882270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.608 [2024-11-15 09:28:26.884397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.608 [2024-11-15 09:28:26.884476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:38.608 [2024-11-15 09:28:26.884660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:38.608 [2024-11-15 09:28:26.884677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:38.608 [2024-11-15 09:28:26.885003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:38.608 [2024-11-15 09:28:26.885210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:38.608 [2024-11-15 09:28:26.885310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:38.608 [2024-11-15 09:28:26.885499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.608 "name": "raid_bdev1", 00:09:38.608 "uuid": "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:38.608 "strip_size_kb": 64, 00:09:38.608 "state": "online", 00:09:38.608 "raid_level": "concat", 00:09:38.608 "superblock": true, 00:09:38.608 "num_base_bdevs": 3, 00:09:38.608 "num_base_bdevs_discovered": 3, 00:09:38.608 "num_base_bdevs_operational": 3, 00:09:38.608 "base_bdevs_list": [ 00:09:38.608 { 00:09:38.608 "name": "pt1", 00:09:38.608 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.608 "is_configured": true, 00:09:38.608 "data_offset": 2048, 00:09:38.608 "data_size": 63488 00:09:38.608 }, 00:09:38.608 { 00:09:38.608 "name": "pt2", 00:09:38.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.608 "is_configured": true, 00:09:38.608 "data_offset": 2048, 00:09:38.608 "data_size": 63488 00:09:38.608 }, 00:09:38.608 { 00:09:38.608 "name": "pt3", 00:09:38.608 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:38.608 "is_configured": true, 00:09:38.608 "data_offset": 2048, 00:09:38.608 "data_size": 63488 00:09:38.608 } 00:09:38.608 ] 00:09:38.608 }' 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.608 09:28:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.867 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:38.867 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:38.867 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.867 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:38.867 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.867 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.867 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.868 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.868 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.868 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.126 [2024-11-15 09:28:27.337778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.126 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.126 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.126 "name": "raid_bdev1", 00:09:39.126 "aliases": [ 00:09:39.126 "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4" 00:09:39.126 ], 00:09:39.126 "product_name": "Raid Volume", 00:09:39.126 "block_size": 512, 00:09:39.126 "num_blocks": 190464, 00:09:39.126 "uuid": "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:39.126 "assigned_rate_limits": { 00:09:39.126 "rw_ios_per_sec": 0, 00:09:39.126 "rw_mbytes_per_sec": 0, 00:09:39.126 "r_mbytes_per_sec": 0, 00:09:39.126 "w_mbytes_per_sec": 0 00:09:39.126 }, 00:09:39.126 "claimed": false, 00:09:39.126 "zoned": false, 00:09:39.126 "supported_io_types": { 00:09:39.126 "read": true, 00:09:39.126 "write": true, 00:09:39.126 "unmap": true, 00:09:39.126 "flush": true, 00:09:39.126 "reset": true, 00:09:39.126 "nvme_admin": false, 00:09:39.126 "nvme_io": false, 00:09:39.126 "nvme_io_md": false, 00:09:39.126 "write_zeroes": true, 00:09:39.126 "zcopy": false, 00:09:39.126 "get_zone_info": false, 00:09:39.126 "zone_management": false, 00:09:39.126 "zone_append": false, 00:09:39.126 "compare": 
false, 00:09:39.126 "compare_and_write": false, 00:09:39.126 "abort": false, 00:09:39.126 "seek_hole": false, 00:09:39.126 "seek_data": false, 00:09:39.126 "copy": false, 00:09:39.126 "nvme_iov_md": false 00:09:39.126 }, 00:09:39.126 "memory_domains": [ 00:09:39.126 { 00:09:39.126 "dma_device_id": "system", 00:09:39.126 "dma_device_type": 1 00:09:39.126 }, 00:09:39.126 { 00:09:39.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.126 "dma_device_type": 2 00:09:39.126 }, 00:09:39.126 { 00:09:39.126 "dma_device_id": "system", 00:09:39.126 "dma_device_type": 1 00:09:39.126 }, 00:09:39.126 { 00:09:39.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.126 "dma_device_type": 2 00:09:39.126 }, 00:09:39.126 { 00:09:39.126 "dma_device_id": "system", 00:09:39.126 "dma_device_type": 1 00:09:39.126 }, 00:09:39.126 { 00:09:39.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.126 "dma_device_type": 2 00:09:39.126 } 00:09:39.126 ], 00:09:39.126 "driver_specific": { 00:09:39.126 "raid": { 00:09:39.126 "uuid": "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:39.126 "strip_size_kb": 64, 00:09:39.126 "state": "online", 00:09:39.126 "raid_level": "concat", 00:09:39.126 "superblock": true, 00:09:39.126 "num_base_bdevs": 3, 00:09:39.126 "num_base_bdevs_discovered": 3, 00:09:39.126 "num_base_bdevs_operational": 3, 00:09:39.126 "base_bdevs_list": [ 00:09:39.126 { 00:09:39.126 "name": "pt1", 00:09:39.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.126 "is_configured": true, 00:09:39.126 "data_offset": 2048, 00:09:39.126 "data_size": 63488 00:09:39.126 }, 00:09:39.126 { 00:09:39.126 "name": "pt2", 00:09:39.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.126 "is_configured": true, 00:09:39.126 "data_offset": 2048, 00:09:39.126 "data_size": 63488 00:09:39.126 }, 00:09:39.126 { 00:09:39.126 "name": "pt3", 00:09:39.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.126 "is_configured": true, 00:09:39.126 "data_offset": 2048, 00:09:39.126 
"data_size": 63488 00:09:39.126 } 00:09:39.126 ] 00:09:39.126 } 00:09:39.126 } 00:09:39.126 }' 00:09:39.126 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.126 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:39.126 pt2 00:09:39.126 pt3' 00:09:39.126 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.127 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:39.386 [2024-11-15 09:28:27.617268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4 ']' 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 [2024-11-15 09:28:27.664904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.386 [2024-11-15 09:28:27.664943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.386 [2024-11-15 09:28:27.665038] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.386 [2024-11-15 09:28:27.665103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.386 [2024-11-15 09:28:27.665114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.386 [2024-11-15 09:28:27.796733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:39.386 [2024-11-15 09:28:27.798659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:39.386 
[2024-11-15 09:28:27.798716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:39.386 [2024-11-15 09:28:27.798768] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:39.386 [2024-11-15 09:28:27.798823] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:39.386 [2024-11-15 09:28:27.798843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:39.386 [2024-11-15 09:28:27.798876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.386 [2024-11-15 09:28:27.798886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:39.386 request: 00:09:39.386 { 00:09:39.386 "name": "raid_bdev1", 00:09:39.386 "raid_level": "concat", 00:09:39.386 "base_bdevs": [ 00:09:39.386 "malloc1", 00:09:39.386 "malloc2", 00:09:39.386 "malloc3" 00:09:39.386 ], 00:09:39.386 "strip_size_kb": 64, 00:09:39.386 "superblock": false, 00:09:39.386 "method": "bdev_raid_create", 00:09:39.386 "req_id": 1 00:09:39.386 } 00:09:39.386 Got JSON-RPC error response 00:09:39.386 response: 00:09:39.386 { 00:09:39.386 "code": -17, 00:09:39.386 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:39.386 } 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:39.386 09:28:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.386 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.387 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.387 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:39.387 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.646 [2024-11-15 09:28:27.860557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.646 [2024-11-15 09:28:27.860717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.646 [2024-11-15 09:28:27.860787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:39.646 [2024-11-15 09:28:27.860833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.646 [2024-11-15 09:28:27.863425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.646 [2024-11-15 09:28:27.863508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.646 [2024-11-15 09:28:27.863636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:39.646 [2024-11-15 09:28:27.863727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:39.646 pt1 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.646 "name": "raid_bdev1", 00:09:39.646 "uuid": 
"77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:39.646 "strip_size_kb": 64, 00:09:39.646 "state": "configuring", 00:09:39.646 "raid_level": "concat", 00:09:39.646 "superblock": true, 00:09:39.646 "num_base_bdevs": 3, 00:09:39.646 "num_base_bdevs_discovered": 1, 00:09:39.646 "num_base_bdevs_operational": 3, 00:09:39.646 "base_bdevs_list": [ 00:09:39.646 { 00:09:39.646 "name": "pt1", 00:09:39.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.646 "is_configured": true, 00:09:39.646 "data_offset": 2048, 00:09:39.646 "data_size": 63488 00:09:39.646 }, 00:09:39.646 { 00:09:39.646 "name": null, 00:09:39.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.646 "is_configured": false, 00:09:39.646 "data_offset": 2048, 00:09:39.646 "data_size": 63488 00:09:39.646 }, 00:09:39.646 { 00:09:39.646 "name": null, 00:09:39.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.646 "is_configured": false, 00:09:39.646 "data_offset": 2048, 00:09:39.646 "data_size": 63488 00:09:39.646 } 00:09:39.646 ] 00:09:39.646 }' 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.646 09:28:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.905 [2024-11-15 09:28:28.347794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.905 [2024-11-15 09:28:28.347995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.905 [2024-11-15 09:28:28.348040] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:39.905 [2024-11-15 09:28:28.348052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.905 [2024-11-15 09:28:28.348563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.905 [2024-11-15 09:28:28.348585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.905 [2024-11-15 09:28:28.348690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:39.905 [2024-11-15 09:28:28.348715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.905 pt2 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.905 [2024-11-15 09:28:28.359805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.905 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.906 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.165 "name": "raid_bdev1", 00:09:40.165 "uuid": "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:40.165 "strip_size_kb": 64, 00:09:40.165 "state": "configuring", 00:09:40.165 "raid_level": "concat", 00:09:40.165 "superblock": true, 00:09:40.165 "num_base_bdevs": 3, 00:09:40.165 "num_base_bdevs_discovered": 1, 00:09:40.165 "num_base_bdevs_operational": 3, 00:09:40.165 "base_bdevs_list": [ 00:09:40.165 { 00:09:40.165 "name": "pt1", 00:09:40.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.165 "is_configured": true, 00:09:40.165 "data_offset": 2048, 00:09:40.165 "data_size": 63488 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "name": null, 00:09:40.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.165 "is_configured": false, 00:09:40.165 "data_offset": 0, 00:09:40.165 "data_size": 63488 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "name": null, 00:09:40.165 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:40.165 "is_configured": false, 00:09:40.165 "data_offset": 2048, 00:09:40.165 "data_size": 63488 00:09:40.165 } 00:09:40.165 ] 00:09:40.165 }' 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.165 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 [2024-11-15 09:28:28.830948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.424 [2024-11-15 09:28:28.831114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.424 [2024-11-15 09:28:28.831155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:40.424 [2024-11-15 09:28:28.831192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.424 [2024-11-15 09:28:28.831722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.424 [2024-11-15 09:28:28.831794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.424 [2024-11-15 09:28:28.831929] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:40.424 [2024-11-15 09:28:28.831995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.424 pt2 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.424 [2024-11-15 09:28:28.842886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.424 [2024-11-15 09:28:28.842976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.424 [2024-11-15 09:28:28.843007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:40.424 [2024-11-15 09:28:28.843034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.424 [2024-11-15 09:28:28.843433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.424 [2024-11-15 09:28:28.843500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.424 [2024-11-15 09:28:28.843589] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:40.424 [2024-11-15 09:28:28.843639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.424 [2024-11-15 09:28:28.843802] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.424 [2024-11-15 09:28:28.843841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.424 [2024-11-15 09:28:28.844158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:40.424 [2024-11-15 
09:28:28.844346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.424 [2024-11-15 09:28:28.844387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:40.424 [2024-11-15 09:28:28.844584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.424 pt3 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.424 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.425 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.683 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.683 "name": "raid_bdev1", 00:09:40.683 "uuid": "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:40.683 "strip_size_kb": 64, 00:09:40.683 "state": "online", 00:09:40.683 "raid_level": "concat", 00:09:40.683 "superblock": true, 00:09:40.683 "num_base_bdevs": 3, 00:09:40.683 "num_base_bdevs_discovered": 3, 00:09:40.683 "num_base_bdevs_operational": 3, 00:09:40.683 "base_bdevs_list": [ 00:09:40.683 { 00:09:40.683 "name": "pt1", 00:09:40.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.683 "is_configured": true, 00:09:40.683 "data_offset": 2048, 00:09:40.683 "data_size": 63488 00:09:40.683 }, 00:09:40.683 { 00:09:40.683 "name": "pt2", 00:09:40.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.683 "is_configured": true, 00:09:40.683 "data_offset": 2048, 00:09:40.683 "data_size": 63488 00:09:40.683 }, 00:09:40.683 { 00:09:40.683 "name": "pt3", 00:09:40.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.683 "is_configured": true, 00:09:40.683 "data_offset": 2048, 00:09:40.683 "data_size": 63488 00:09:40.683 } 00:09:40.683 ] 00:09:40.683 }' 00:09:40.683 09:28:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.683 09:28:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.951 [2024-11-15 09:28:29.282549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.951 "name": "raid_bdev1", 00:09:40.951 "aliases": [ 00:09:40.951 "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4" 00:09:40.951 ], 00:09:40.951 "product_name": "Raid Volume", 00:09:40.951 "block_size": 512, 00:09:40.951 "num_blocks": 190464, 00:09:40.951 "uuid": "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:40.951 "assigned_rate_limits": { 00:09:40.951 "rw_ios_per_sec": 0, 00:09:40.951 "rw_mbytes_per_sec": 0, 00:09:40.951 "r_mbytes_per_sec": 0, 00:09:40.951 "w_mbytes_per_sec": 0 00:09:40.951 }, 00:09:40.951 "claimed": false, 00:09:40.951 "zoned": false, 00:09:40.951 "supported_io_types": { 00:09:40.951 "read": true, 00:09:40.951 "write": true, 00:09:40.951 "unmap": true, 00:09:40.951 "flush": true, 00:09:40.951 "reset": true, 00:09:40.951 "nvme_admin": false, 00:09:40.951 "nvme_io": false, 00:09:40.951 "nvme_io_md": false, 
00:09:40.951 "write_zeroes": true, 00:09:40.951 "zcopy": false, 00:09:40.951 "get_zone_info": false, 00:09:40.951 "zone_management": false, 00:09:40.951 "zone_append": false, 00:09:40.951 "compare": false, 00:09:40.951 "compare_and_write": false, 00:09:40.951 "abort": false, 00:09:40.951 "seek_hole": false, 00:09:40.951 "seek_data": false, 00:09:40.951 "copy": false, 00:09:40.951 "nvme_iov_md": false 00:09:40.951 }, 00:09:40.951 "memory_domains": [ 00:09:40.951 { 00:09:40.951 "dma_device_id": "system", 00:09:40.951 "dma_device_type": 1 00:09:40.951 }, 00:09:40.951 { 00:09:40.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.951 "dma_device_type": 2 00:09:40.951 }, 00:09:40.951 { 00:09:40.951 "dma_device_id": "system", 00:09:40.951 "dma_device_type": 1 00:09:40.951 }, 00:09:40.951 { 00:09:40.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.951 "dma_device_type": 2 00:09:40.951 }, 00:09:40.951 { 00:09:40.951 "dma_device_id": "system", 00:09:40.951 "dma_device_type": 1 00:09:40.951 }, 00:09:40.951 { 00:09:40.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.951 "dma_device_type": 2 00:09:40.951 } 00:09:40.951 ], 00:09:40.951 "driver_specific": { 00:09:40.951 "raid": { 00:09:40.951 "uuid": "77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4", 00:09:40.951 "strip_size_kb": 64, 00:09:40.951 "state": "online", 00:09:40.951 "raid_level": "concat", 00:09:40.951 "superblock": true, 00:09:40.951 "num_base_bdevs": 3, 00:09:40.951 "num_base_bdevs_discovered": 3, 00:09:40.951 "num_base_bdevs_operational": 3, 00:09:40.951 "base_bdevs_list": [ 00:09:40.951 { 00:09:40.951 "name": "pt1", 00:09:40.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.951 "is_configured": true, 00:09:40.951 "data_offset": 2048, 00:09:40.951 "data_size": 63488 00:09:40.951 }, 00:09:40.951 { 00:09:40.951 "name": "pt2", 00:09:40.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.951 "is_configured": true, 00:09:40.951 "data_offset": 2048, 00:09:40.951 "data_size": 63488 00:09:40.951 }, 
00:09:40.951 { 00:09:40.951 "name": "pt3", 00:09:40.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.951 "is_configured": true, 00:09:40.951 "data_offset": 2048, 00:09:40.951 "data_size": 63488 00:09:40.951 } 00:09:40.951 ] 00:09:40.951 } 00:09:40.951 } 00:09:40.951 }' 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.951 pt2 00:09:40.951 pt3' 00:09:40.951 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:41.209 09:28:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.209 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.210 
[2024-11-15 09:28:29.570105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4 '!=' 77e2c7a8-dfd8-4f4f-8f65-0caa7fc5b5b4 ']' 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67177 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 67177 ']' 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 67177 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67177 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67177' 00:09:41.210 killing process with pid 67177 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 67177 00:09:41.210 [2024-11-15 09:28:29.643450] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.210 [2024-11-15 09:28:29.643577] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.210 09:28:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 67177 00:09:41.210 [2024-11-15 09:28:29.643649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.210 [2024-11-15 09:28:29.643664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:41.776 [2024-11-15 09:28:29.977316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.153 09:28:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:43.153 ************************************ 00:09:43.153 END TEST raid_superblock_test 00:09:43.153 ************************************ 00:09:43.153 00:09:43.153 real 0m5.462s 00:09:43.153 user 0m7.808s 00:09:43.153 sys 0m0.899s 00:09:43.153 09:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:43.153 09:28:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.153 09:28:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:43.153 09:28:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:43.153 09:28:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:43.153 09:28:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.153 ************************************ 00:09:43.153 START TEST raid_read_error_test 00:09:43.153 ************************************ 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:43.153 09:28:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LWhTNccM6f 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67432 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67432 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67432 ']' 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:43.153 09:28:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.153 [2024-11-15 09:28:31.370141] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:09:43.153 [2024-11-15 09:28:31.370405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67432 ] 00:09:43.153 [2024-11-15 09:28:31.555412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.412 [2024-11-15 09:28:31.676488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.671 [2024-11-15 09:28:31.901232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.671 [2024-11-15 09:28:31.901314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.929 BaseBdev1_malloc 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.929 true 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.929 [2024-11-15 09:28:32.326315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.929 [2024-11-15 09:28:32.326385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.929 [2024-11-15 09:28:32.326404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.929 [2024-11-15 09:28:32.326417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.929 [2024-11-15 09:28:32.328566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.929 [2024-11-15 09:28:32.328718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.929 BaseBdev1 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.929 BaseBdev2_malloc 00:09:43.929 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.930 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.930 09:28:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.930 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.930 true 00:09:43.930 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.930 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.930 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.930 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 [2024-11-15 09:28:32.396773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:44.190 [2024-11-15 09:28:32.396956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.190 [2024-11-15 09:28:32.396986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:44.190 [2024-11-15 09:28:32.397002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.190 [2024-11-15 09:28:32.399509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.190 [2024-11-15 09:28:32.399559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:44.190 BaseBdev2 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.190 BaseBdev3_malloc 00:09:44.190 09:28:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.190 true 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.190 [2024-11-15 09:28:32.483465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:44.190 [2024-11-15 09:28:32.483532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.190 [2024-11-15 09:28:32.483566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:44.190 [2024-11-15 09:28:32.483578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.190 [2024-11-15 09:28:32.485854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.190 [2024-11-15 09:28:32.485905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:44.190 BaseBdev3 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.190 [2024-11-15 09:28:32.495521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.190 [2024-11-15 09:28:32.497372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:44.190 [2024-11-15 09:28:32.497536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.190 [2024-11-15 09:28:32.497749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:44.190 [2024-11-15 09:28:32.497762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.190 [2024-11-15 09:28:32.498025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:44.190 [2024-11-15 09:28:32.498180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:44.190 [2024-11-15 09:28:32.498194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:44.190 [2024-11-15 09:28:32.498348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.190 09:28:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.190 "name": "raid_bdev1", 00:09:44.190 "uuid": "314f1b72-be77-4242-8d5f-de62e815d25e", 00:09:44.190 "strip_size_kb": 64, 00:09:44.190 "state": "online", 00:09:44.190 "raid_level": "concat", 00:09:44.190 "superblock": true, 00:09:44.190 "num_base_bdevs": 3, 00:09:44.190 "num_base_bdevs_discovered": 3, 00:09:44.190 "num_base_bdevs_operational": 3, 00:09:44.190 "base_bdevs_list": [ 00:09:44.190 { 00:09:44.190 "name": "BaseBdev1", 00:09:44.190 "uuid": "895dbf32-e32b-5daa-a8c9-a47e71d97520", 00:09:44.190 "is_configured": true, 00:09:44.190 "data_offset": 2048, 00:09:44.190 "data_size": 63488 00:09:44.190 }, 00:09:44.190 { 00:09:44.190 "name": "BaseBdev2", 00:09:44.190 "uuid": "622ff0a5-0c42-52b0-b21b-58bb5dca55fc", 00:09:44.190 "is_configured": true, 00:09:44.190 "data_offset": 2048, 00:09:44.190 "data_size": 63488 
00:09:44.190 }, 00:09:44.190 { 00:09:44.190 "name": "BaseBdev3", 00:09:44.190 "uuid": "6fa767cf-53df-5f9b-85b9-07c52585132e", 00:09:44.190 "is_configured": true, 00:09:44.190 "data_offset": 2048, 00:09:44.190 "data_size": 63488 00:09:44.190 } 00:09:44.190 ] 00:09:44.190 }' 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.190 09:28:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.758 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.758 09:28:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.758 [2024-11-15 09:28:33.064078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.696 09:28:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.696 09:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.696 "name": "raid_bdev1", 00:09:45.696 "uuid": "314f1b72-be77-4242-8d5f-de62e815d25e", 00:09:45.696 "strip_size_kb": 64, 00:09:45.696 "state": "online", 00:09:45.696 "raid_level": "concat", 00:09:45.696 "superblock": true, 00:09:45.696 "num_base_bdevs": 3, 00:09:45.696 "num_base_bdevs_discovered": 3, 00:09:45.696 "num_base_bdevs_operational": 3, 00:09:45.696 "base_bdevs_list": [ 00:09:45.696 { 00:09:45.696 "name": "BaseBdev1", 00:09:45.696 "uuid": "895dbf32-e32b-5daa-a8c9-a47e71d97520", 00:09:45.696 "is_configured": true, 00:09:45.696 "data_offset": 2048, 00:09:45.696 "data_size": 63488 
00:09:45.696 }, 00:09:45.696 { 00:09:45.696 "name": "BaseBdev2", 00:09:45.696 "uuid": "622ff0a5-0c42-52b0-b21b-58bb5dca55fc", 00:09:45.696 "is_configured": true, 00:09:45.696 "data_offset": 2048, 00:09:45.696 "data_size": 63488 00:09:45.696 }, 00:09:45.696 { 00:09:45.696 "name": "BaseBdev3", 00:09:45.696 "uuid": "6fa767cf-53df-5f9b-85b9-07c52585132e", 00:09:45.696 "is_configured": true, 00:09:45.696 "data_offset": 2048, 00:09:45.696 "data_size": 63488 00:09:45.696 } 00:09:45.696 ] 00:09:45.696 }' 00:09:45.696 09:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.696 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.265 [2024-11-15 09:28:34.428780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:46.265 [2024-11-15 09:28:34.428829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.265 [2024-11-15 09:28:34.431649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.265 [2024-11-15 09:28:34.431698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.265 [2024-11-15 09:28:34.431735] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.265 [2024-11-15 09:28:34.431747] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # 
killprocess 67432 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67432 ']' 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67432 00:09:46.265 { 00:09:46.265 "results": [ 00:09:46.265 { 00:09:46.265 "job": "raid_bdev1", 00:09:46.265 "core_mask": "0x1", 00:09:46.265 "workload": "randrw", 00:09:46.265 "percentage": 50, 00:09:46.265 "status": "finished", 00:09:46.265 "queue_depth": 1, 00:09:46.265 "io_size": 131072, 00:09:46.265 "runtime": 1.365397, 00:09:46.265 "iops": 14372.37667872421, 00:09:46.265 "mibps": 1796.5470848405262, 00:09:46.265 "io_failed": 1, 00:09:46.265 "io_timeout": 0, 00:09:46.265 "avg_latency_us": 96.6868586877312, 00:09:46.265 "min_latency_us": 26.606113537117903, 00:09:46.265 "max_latency_us": 1566.8541484716156 00:09:46.265 } 00:09:46.265 ], 00:09:46.265 "core_count": 1 00:09:46.265 } 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67432 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67432' 00:09:46.265 killing process with pid 67432 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67432 00:09:46.265 09:28:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67432 00:09:46.265 [2024-11-15 09:28:34.477611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.525 [2024-11-15 
09:28:34.732499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LWhTNccM6f 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.904 ************************************ 00:09:47.904 END TEST raid_read_error_test 00:09:47.904 ************************************ 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:47.904 00:09:47.904 real 0m4.785s 00:09:47.904 user 0m5.674s 00:09:47.904 sys 0m0.625s 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.904 09:28:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.904 09:28:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:47.904 09:28:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:47.904 09:28:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.904 09:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.904 ************************************ 00:09:47.904 START TEST raid_write_error_test 00:09:47.904 ************************************ 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:09:47.904 09:28:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:47.904 09:28:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9VWeOIOvur 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67579 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67579 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67579 ']' 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.904 09:28:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:47.904 [2024-11-15 09:28:36.208510] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:47.904 [2024-11-15 09:28:36.208675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67579 ] 00:09:48.163 [2024-11-15 09:28:36.395059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.163 [2024-11-15 09:28:36.509333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.422 [2024-11-15 09:28:36.730562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.422 [2024-11-15 09:28:36.730617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.681 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.681 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:48.681 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.681 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.681 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.681 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.681 BaseBdev1_malloc 00:09:48.681 09:28:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.681 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.682 true 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.682 [2024-11-15 09:28:37.123621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.682 [2024-11-15 09:28:37.123701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.682 [2024-11-15 09:28:37.123722] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.682 [2024-11-15 09:28:37.123733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.682 [2024-11-15 09:28:37.126145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.682 [2024-11-15 09:28:37.126194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.682 BaseBdev1 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.682 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 BaseBdev2_malloc 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 true 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 [2024-11-15 09:28:37.193193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.941 [2024-11-15 09:28:37.193260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.941 [2024-11-15 09:28:37.193279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:48.941 [2024-11-15 09:28:37.193290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.941 [2024-11-15 09:28:37.195535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.941 [2024-11-15 09:28:37.195580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.941 BaseBdev2 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 BaseBdev3_malloc 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 true 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 [2024-11-15 09:28:37.274116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:48.941 [2024-11-15 09:28:37.274187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.941 [2024-11-15 09:28:37.274203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.941 [2024-11-15 09:28:37.274214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.941 [2024-11-15 09:28:37.276589] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.941 [2024-11-15 09:28:37.276638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:48.941 BaseBdev3 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 [2024-11-15 09:28:37.286186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.941 [2024-11-15 09:28:37.288244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.941 [2024-11-15 09:28:37.288333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.941 [2024-11-15 09:28:37.288553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.941 [2024-11-15 09:28:37.288566] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:48.941 [2024-11-15 09:28:37.288867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:48.941 [2024-11-15 09:28:37.289051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.941 [2024-11-15 09:28:37.289067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.941 [2024-11-15 09:28:37.289234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 
09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.941 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.941 "name": "raid_bdev1", 00:09:48.941 "uuid": "e9397e98-7223-4a4e-9e15-423a3b452c61", 00:09:48.941 "strip_size_kb": 64, 00:09:48.941 "state": "online", 00:09:48.941 "raid_level": "concat", 00:09:48.941 "superblock": true, 
00:09:48.941 "num_base_bdevs": 3, 00:09:48.941 "num_base_bdevs_discovered": 3, 00:09:48.941 "num_base_bdevs_operational": 3, 00:09:48.941 "base_bdevs_list": [ 00:09:48.941 { 00:09:48.941 "name": "BaseBdev1", 00:09:48.941 "uuid": "69d96d40-1943-5e20-8aa7-af5a6f253d03", 00:09:48.941 "is_configured": true, 00:09:48.941 "data_offset": 2048, 00:09:48.941 "data_size": 63488 00:09:48.941 }, 00:09:48.941 { 00:09:48.941 "name": "BaseBdev2", 00:09:48.941 "uuid": "9770727e-20cc-5d4b-a244-e82eaf2c9669", 00:09:48.941 "is_configured": true, 00:09:48.941 "data_offset": 2048, 00:09:48.941 "data_size": 63488 00:09:48.941 }, 00:09:48.941 { 00:09:48.941 "name": "BaseBdev3", 00:09:48.941 "uuid": "207f51ef-1060-5d8c-a82f-704548d86c43", 00:09:48.941 "is_configured": true, 00:09:48.941 "data_offset": 2048, 00:09:48.941 "data_size": 63488 00:09:48.941 } 00:09:48.941 ] 00:09:48.941 }' 00:09:48.942 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.942 09:28:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.508 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.508 09:28:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.508 [2024-11-15 09:28:37.842646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local 
expected_num_base_bdevs 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.444 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:50.445 "name": "raid_bdev1", 00:09:50.445 "uuid": "e9397e98-7223-4a4e-9e15-423a3b452c61", 00:09:50.445 "strip_size_kb": 64, 00:09:50.445 "state": "online", 00:09:50.445 "raid_level": "concat", 00:09:50.445 "superblock": true, 00:09:50.445 "num_base_bdevs": 3, 00:09:50.445 "num_base_bdevs_discovered": 3, 00:09:50.445 "num_base_bdevs_operational": 3, 00:09:50.445 "base_bdevs_list": [ 00:09:50.445 { 00:09:50.445 "name": "BaseBdev1", 00:09:50.445 "uuid": "69d96d40-1943-5e20-8aa7-af5a6f253d03", 00:09:50.445 "is_configured": true, 00:09:50.445 "data_offset": 2048, 00:09:50.445 "data_size": 63488 00:09:50.445 }, 00:09:50.445 { 00:09:50.445 "name": "BaseBdev2", 00:09:50.445 "uuid": "9770727e-20cc-5d4b-a244-e82eaf2c9669", 00:09:50.445 "is_configured": true, 00:09:50.445 "data_offset": 2048, 00:09:50.445 "data_size": 63488 00:09:50.445 }, 00:09:50.445 { 00:09:50.445 "name": "BaseBdev3", 00:09:50.445 "uuid": "207f51ef-1060-5d8c-a82f-704548d86c43", 00:09:50.445 "is_configured": true, 00:09:50.445 "data_offset": 2048, 00:09:50.445 "data_size": 63488 00:09:50.445 } 00:09:50.445 ] 00:09:50.445 }' 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.445 09:28:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.012 [2024-11-15 09:28:39.255585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.012 [2024-11-15 09:28:39.255634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.012 [2024-11-15 09:28:39.258505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:09:51.012 [2024-11-15 09:28:39.258555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.012 [2024-11-15 09:28:39.258594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.012 [2024-11-15 09:28:39.258606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:51.012 { 00:09:51.012 "results": [ 00:09:51.012 { 00:09:51.012 "job": "raid_bdev1", 00:09:51.012 "core_mask": "0x1", 00:09:51.012 "workload": "randrw", 00:09:51.012 "percentage": 50, 00:09:51.012 "status": "finished", 00:09:51.012 "queue_depth": 1, 00:09:51.012 "io_size": 131072, 00:09:51.012 "runtime": 1.413474, 00:09:51.012 "iops": 14538.647332741883, 00:09:51.012 "mibps": 1817.3309165927353, 00:09:51.012 "io_failed": 1, 00:09:51.012 "io_timeout": 0, 00:09:51.012 "avg_latency_us": 95.45764306882504, 00:09:51.012 "min_latency_us": 27.83580786026201, 00:09:51.012 "max_latency_us": 1652.709170305677 00:09:51.012 } 00:09:51.012 ], 00:09:51.012 "core_count": 1 00:09:51.012 } 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67579 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67579 ']' 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67579 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67579 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:09:51.012 killing process with pid 67579 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67579' 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67579 00:09:51.012 [2024-11-15 09:28:39.307879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.012 09:28:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67579 00:09:51.270 [2024-11-15 09:28:39.563823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.646 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.646 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.646 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9VWeOIOvur 00:09:52.646 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:52.646 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:52.646 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.646 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.647 09:28:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:52.647 00:09:52.647 real 0m4.750s 00:09:52.647 user 0m5.672s 00:09:52.647 sys 0m0.573s 00:09:52.647 09:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.647 09:28:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.647 ************************************ 00:09:52.647 END TEST raid_write_error_test 00:09:52.647 ************************************ 
00:09:52.647 09:28:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:52.647 09:28:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:52.647 09:28:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:52.647 09:28:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.647 09:28:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.647 ************************************ 00:09:52.647 START TEST raid_state_function_test 00:09:52.647 ************************************ 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:52.647 Process raid pid: 67723 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67723 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67723' 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67723 
00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67723 ']' 00:09:52.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:52.647 09:28:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.647 [2024-11-15 09:28:41.031957] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:09:52.647 [2024-11-15 09:28:41.032127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.906 [2024-11-15 09:28:41.220650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.906 [2024-11-15 09:28:41.346834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.165 [2024-11-15 09:28:41.571497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.165 [2024-11-15 09:28:41.571639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.732 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.732 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:53.732 09:28:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.732 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.733 [2024-11-15 09:28:41.925148] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.733 [2024-11-15 09:28:41.925325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.733 [2024-11-15 09:28:41.925361] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.733 [2024-11-15 09:28:41.925385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.733 [2024-11-15 09:28:41.925404] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.733 [2024-11-15 09:28:41.925426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.733 09:28:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.733 "name": "Existed_Raid", 00:09:53.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.733 "strip_size_kb": 0, 00:09:53.733 "state": "configuring", 00:09:53.733 "raid_level": "raid1", 00:09:53.733 "superblock": false, 00:09:53.733 "num_base_bdevs": 3, 00:09:53.733 "num_base_bdevs_discovered": 0, 00:09:53.733 "num_base_bdevs_operational": 3, 00:09:53.733 "base_bdevs_list": [ 00:09:53.733 { 00:09:53.733 "name": "BaseBdev1", 00:09:53.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.733 "is_configured": false, 00:09:53.733 "data_offset": 0, 00:09:53.733 "data_size": 0 00:09:53.733 }, 00:09:53.733 { 00:09:53.733 "name": "BaseBdev2", 00:09:53.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.733 "is_configured": false, 00:09:53.733 "data_offset": 0, 00:09:53.733 "data_size": 0 00:09:53.733 }, 00:09:53.733 { 00:09:53.733 "name": "BaseBdev3", 00:09:53.733 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:53.733 "is_configured": false, 00:09:53.733 "data_offset": 0, 00:09:53.733 "data_size": 0 00:09:53.733 } 00:09:53.733 ] 00:09:53.733 }' 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.733 09:28:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 [2024-11-15 09:28:42.356360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.993 [2024-11-15 09:28:42.356498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 [2024-11-15 09:28:42.368311] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.993 [2024-11-15 09:28:42.368365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.993 [2024-11-15 09:28:42.368376] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.993 [2024-11-15 09:28:42.368388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:53.993 [2024-11-15 09:28:42.368395] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.993 [2024-11-15 09:28:42.368407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 [2024-11-15 09:28:42.419328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.993 BaseBdev1 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.993 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.993 [ 00:09:53.993 { 00:09:53.993 "name": "BaseBdev1", 00:09:53.993 "aliases": [ 00:09:53.993 "6661928b-5213-48d6-bfbd-64289cb06aa1" 00:09:53.993 ], 00:09:53.993 "product_name": "Malloc disk", 00:09:53.993 "block_size": 512, 00:09:53.993 "num_blocks": 65536, 00:09:53.993 "uuid": "6661928b-5213-48d6-bfbd-64289cb06aa1", 00:09:53.993 "assigned_rate_limits": { 00:09:53.993 "rw_ios_per_sec": 0, 00:09:53.993 "rw_mbytes_per_sec": 0, 00:09:53.993 "r_mbytes_per_sec": 0, 00:09:53.993 "w_mbytes_per_sec": 0 00:09:53.993 }, 00:09:53.993 "claimed": true, 00:09:53.993 "claim_type": "exclusive_write", 00:09:53.993 "zoned": false, 00:09:53.993 "supported_io_types": { 00:09:53.993 "read": true, 00:09:53.993 "write": true, 00:09:53.993 "unmap": true, 00:09:53.993 "flush": true, 00:09:53.993 "reset": true, 00:09:53.993 "nvme_admin": false, 00:09:53.993 "nvme_io": false, 00:09:53.993 "nvme_io_md": false, 00:09:53.993 "write_zeroes": true, 00:09:53.993 "zcopy": true, 00:09:53.993 "get_zone_info": false, 00:09:53.993 "zone_management": false, 00:09:53.993 "zone_append": false, 00:09:53.993 "compare": false, 00:09:53.993 "compare_and_write": false, 00:09:53.993 "abort": true, 00:09:53.993 "seek_hole": false, 00:09:53.993 "seek_data": false, 00:09:54.252 "copy": true, 00:09:54.252 "nvme_iov_md": false 00:09:54.252 }, 00:09:54.252 "memory_domains": [ 00:09:54.252 { 00:09:54.252 "dma_device_id": "system", 00:09:54.252 "dma_device_type": 1 00:09:54.252 }, 00:09:54.252 { 00:09:54.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:54.252 "dma_device_type": 2 00:09:54.252 } 00:09:54.252 ], 00:09:54.252 "driver_specific": {} 00:09:54.252 } 00:09:54.252 ] 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.252 09:28:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.252 "name": "Existed_Raid", 00:09:54.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.252 "strip_size_kb": 0, 00:09:54.252 "state": "configuring", 00:09:54.252 "raid_level": "raid1", 00:09:54.252 "superblock": false, 00:09:54.252 "num_base_bdevs": 3, 00:09:54.252 "num_base_bdevs_discovered": 1, 00:09:54.252 "num_base_bdevs_operational": 3, 00:09:54.252 "base_bdevs_list": [ 00:09:54.252 { 00:09:54.252 "name": "BaseBdev1", 00:09:54.252 "uuid": "6661928b-5213-48d6-bfbd-64289cb06aa1", 00:09:54.252 "is_configured": true, 00:09:54.252 "data_offset": 0, 00:09:54.252 "data_size": 65536 00:09:54.252 }, 00:09:54.252 { 00:09:54.252 "name": "BaseBdev2", 00:09:54.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.252 "is_configured": false, 00:09:54.252 "data_offset": 0, 00:09:54.252 "data_size": 0 00:09:54.252 }, 00:09:54.252 { 00:09:54.252 "name": "BaseBdev3", 00:09:54.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.252 "is_configured": false, 00:09:54.252 "data_offset": 0, 00:09:54.252 "data_size": 0 00:09:54.252 } 00:09:54.252 ] 00:09:54.252 }' 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.252 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 [2024-11-15 09:28:42.918570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.511 [2024-11-15 09:28:42.918731] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 [2024-11-15 09:28:42.926618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.511 [2024-11-15 09:28:42.928763] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.511 [2024-11-15 09:28:42.928914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.511 [2024-11-15 09:28:42.928961] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.511 [2024-11-15 09:28:42.928989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.511 09:28:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.511 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.770 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.770 "name": "Existed_Raid", 00:09:54.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.770 "strip_size_kb": 0, 00:09:54.770 "state": "configuring", 00:09:54.770 "raid_level": "raid1", 00:09:54.770 "superblock": false, 00:09:54.770 "num_base_bdevs": 3, 00:09:54.770 "num_base_bdevs_discovered": 1, 00:09:54.770 "num_base_bdevs_operational": 3, 00:09:54.770 "base_bdevs_list": [ 00:09:54.770 { 00:09:54.770 "name": "BaseBdev1", 00:09:54.770 "uuid": "6661928b-5213-48d6-bfbd-64289cb06aa1", 00:09:54.770 "is_configured": true, 00:09:54.770 "data_offset": 0, 
00:09:54.770 "data_size": 65536 00:09:54.770 }, 00:09:54.770 { 00:09:54.770 "name": "BaseBdev2", 00:09:54.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.770 "is_configured": false, 00:09:54.770 "data_offset": 0, 00:09:54.770 "data_size": 0 00:09:54.770 }, 00:09:54.770 { 00:09:54.770 "name": "BaseBdev3", 00:09:54.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.770 "is_configured": false, 00:09:54.770 "data_offset": 0, 00:09:54.770 "data_size": 0 00:09:54.770 } 00:09:54.770 ] 00:09:54.770 }' 00:09:54.770 09:28:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.770 09:28:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.029 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:55.029 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.029 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.029 [2024-11-15 09:28:43.391301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.029 BaseBdev2 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.030 [ 00:09:55.030 { 00:09:55.030 "name": "BaseBdev2", 00:09:55.030 "aliases": [ 00:09:55.030 "b0e55be3-832a-48fe-b57b-c1713663dd2f" 00:09:55.030 ], 00:09:55.030 "product_name": "Malloc disk", 00:09:55.030 "block_size": 512, 00:09:55.030 "num_blocks": 65536, 00:09:55.030 "uuid": "b0e55be3-832a-48fe-b57b-c1713663dd2f", 00:09:55.030 "assigned_rate_limits": { 00:09:55.030 "rw_ios_per_sec": 0, 00:09:55.030 "rw_mbytes_per_sec": 0, 00:09:55.030 "r_mbytes_per_sec": 0, 00:09:55.030 "w_mbytes_per_sec": 0 00:09:55.030 }, 00:09:55.030 "claimed": true, 00:09:55.030 "claim_type": "exclusive_write", 00:09:55.030 "zoned": false, 00:09:55.030 "supported_io_types": { 00:09:55.030 "read": true, 00:09:55.030 "write": true, 00:09:55.030 "unmap": true, 00:09:55.030 "flush": true, 00:09:55.030 "reset": true, 00:09:55.030 "nvme_admin": false, 00:09:55.030 "nvme_io": false, 00:09:55.030 "nvme_io_md": false, 00:09:55.030 "write_zeroes": true, 00:09:55.030 "zcopy": true, 00:09:55.030 "get_zone_info": false, 00:09:55.030 "zone_management": false, 00:09:55.030 "zone_append": false, 00:09:55.030 "compare": false, 00:09:55.030 "compare_and_write": false, 00:09:55.030 "abort": true, 00:09:55.030 "seek_hole": 
false, 00:09:55.030 "seek_data": false, 00:09:55.030 "copy": true, 00:09:55.030 "nvme_iov_md": false 00:09:55.030 }, 00:09:55.030 "memory_domains": [ 00:09:55.030 { 00:09:55.030 "dma_device_id": "system", 00:09:55.030 "dma_device_type": 1 00:09:55.030 }, 00:09:55.030 { 00:09:55.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.030 "dma_device_type": 2 00:09:55.030 } 00:09:55.030 ], 00:09:55.030 "driver_specific": {} 00:09:55.030 } 00:09:55.030 ] 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.030 "name": "Existed_Raid", 00:09:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.030 "strip_size_kb": 0, 00:09:55.030 "state": "configuring", 00:09:55.030 "raid_level": "raid1", 00:09:55.030 "superblock": false, 00:09:55.030 "num_base_bdevs": 3, 00:09:55.030 "num_base_bdevs_discovered": 2, 00:09:55.030 "num_base_bdevs_operational": 3, 00:09:55.030 "base_bdevs_list": [ 00:09:55.030 { 00:09:55.030 "name": "BaseBdev1", 00:09:55.030 "uuid": "6661928b-5213-48d6-bfbd-64289cb06aa1", 00:09:55.030 "is_configured": true, 00:09:55.030 "data_offset": 0, 00:09:55.030 "data_size": 65536 00:09:55.030 }, 00:09:55.030 { 00:09:55.030 "name": "BaseBdev2", 00:09:55.030 "uuid": "b0e55be3-832a-48fe-b57b-c1713663dd2f", 00:09:55.030 "is_configured": true, 00:09:55.030 "data_offset": 0, 00:09:55.030 "data_size": 65536 00:09:55.030 }, 00:09:55.030 { 00:09:55.030 "name": "BaseBdev3", 00:09:55.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.030 "is_configured": false, 00:09:55.030 "data_offset": 0, 00:09:55.030 "data_size": 0 00:09:55.030 } 00:09:55.030 ] 00:09:55.030 }' 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.030 09:28:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.600 [2024-11-15 09:28:43.939011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.600 [2024-11-15 09:28:43.939071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:55.600 [2024-11-15 09:28:43.939086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:55.600 [2024-11-15 09:28:43.939422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:55.600 [2024-11-15 09:28:43.939609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:55.600 [2024-11-15 09:28:43.939619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:55.600 [2024-11-15 09:28:43.939932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.600 BaseBdev3 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:55.600 09:28:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.600 [ 00:09:55.600 { 00:09:55.600 "name": "BaseBdev3", 00:09:55.600 "aliases": [ 00:09:55.600 "93ae6dde-ec2e-48bf-8ee3-d54ded2eac89" 00:09:55.600 ], 00:09:55.600 "product_name": "Malloc disk", 00:09:55.600 "block_size": 512, 00:09:55.600 "num_blocks": 65536, 00:09:55.600 "uuid": "93ae6dde-ec2e-48bf-8ee3-d54ded2eac89", 00:09:55.600 "assigned_rate_limits": { 00:09:55.600 "rw_ios_per_sec": 0, 00:09:55.600 "rw_mbytes_per_sec": 0, 00:09:55.600 "r_mbytes_per_sec": 0, 00:09:55.600 "w_mbytes_per_sec": 0 00:09:55.600 }, 00:09:55.600 "claimed": true, 00:09:55.600 "claim_type": "exclusive_write", 00:09:55.600 "zoned": false, 00:09:55.600 "supported_io_types": { 00:09:55.600 "read": true, 00:09:55.600 "write": true, 00:09:55.600 "unmap": true, 00:09:55.600 "flush": true, 00:09:55.600 "reset": true, 00:09:55.600 "nvme_admin": false, 00:09:55.600 "nvme_io": false, 00:09:55.600 "nvme_io_md": false, 00:09:55.600 "write_zeroes": true, 00:09:55.600 "zcopy": true, 00:09:55.600 "get_zone_info": false, 00:09:55.600 "zone_management": false, 00:09:55.600 "zone_append": false, 00:09:55.600 "compare": false, 
00:09:55.600 "compare_and_write": false, 00:09:55.600 "abort": true, 00:09:55.600 "seek_hole": false, 00:09:55.600 "seek_data": false, 00:09:55.600 "copy": true, 00:09:55.600 "nvme_iov_md": false 00:09:55.600 }, 00:09:55.600 "memory_domains": [ 00:09:55.600 { 00:09:55.600 "dma_device_id": "system", 00:09:55.600 "dma_device_type": 1 00:09:55.600 }, 00:09:55.600 { 00:09:55.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.600 "dma_device_type": 2 00:09:55.600 } 00:09:55.600 ], 00:09:55.600 "driver_specific": {} 00:09:55.600 } 00:09:55.600 ] 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.600 09:28:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.600 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.600 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.600 "name": "Existed_Raid", 00:09:55.600 "uuid": "a2e7e93e-5d71-4df9-80a7-c0ee4637697c", 00:09:55.600 "strip_size_kb": 0, 00:09:55.600 "state": "online", 00:09:55.600 "raid_level": "raid1", 00:09:55.600 "superblock": false, 00:09:55.600 "num_base_bdevs": 3, 00:09:55.600 "num_base_bdevs_discovered": 3, 00:09:55.600 "num_base_bdevs_operational": 3, 00:09:55.600 "base_bdevs_list": [ 00:09:55.600 { 00:09:55.600 "name": "BaseBdev1", 00:09:55.600 "uuid": "6661928b-5213-48d6-bfbd-64289cb06aa1", 00:09:55.600 "is_configured": true, 00:09:55.600 "data_offset": 0, 00:09:55.600 "data_size": 65536 00:09:55.600 }, 00:09:55.600 { 00:09:55.600 "name": "BaseBdev2", 00:09:55.600 "uuid": "b0e55be3-832a-48fe-b57b-c1713663dd2f", 00:09:55.600 "is_configured": true, 00:09:55.600 "data_offset": 0, 00:09:55.600 "data_size": 65536 00:09:55.600 }, 00:09:55.600 { 00:09:55.600 "name": "BaseBdev3", 00:09:55.600 "uuid": "93ae6dde-ec2e-48bf-8ee3-d54ded2eac89", 00:09:55.600 "is_configured": true, 00:09:55.600 "data_offset": 0, 00:09:55.600 "data_size": 65536 00:09:55.600 } 00:09:55.600 ] 00:09:55.600 }' 00:09:55.600 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:55.600 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.168 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.168 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.169 [2024-11-15 09:28:44.494567] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.169 "name": "Existed_Raid", 00:09:56.169 "aliases": [ 00:09:56.169 "a2e7e93e-5d71-4df9-80a7-c0ee4637697c" 00:09:56.169 ], 00:09:56.169 "product_name": "Raid Volume", 00:09:56.169 "block_size": 512, 00:09:56.169 "num_blocks": 65536, 00:09:56.169 "uuid": "a2e7e93e-5d71-4df9-80a7-c0ee4637697c", 00:09:56.169 "assigned_rate_limits": { 00:09:56.169 "rw_ios_per_sec": 0, 00:09:56.169 "rw_mbytes_per_sec": 0, 00:09:56.169 "r_mbytes_per_sec": 
0, 00:09:56.169 "w_mbytes_per_sec": 0 00:09:56.169 }, 00:09:56.169 "claimed": false, 00:09:56.169 "zoned": false, 00:09:56.169 "supported_io_types": { 00:09:56.169 "read": true, 00:09:56.169 "write": true, 00:09:56.169 "unmap": false, 00:09:56.169 "flush": false, 00:09:56.169 "reset": true, 00:09:56.169 "nvme_admin": false, 00:09:56.169 "nvme_io": false, 00:09:56.169 "nvme_io_md": false, 00:09:56.169 "write_zeroes": true, 00:09:56.169 "zcopy": false, 00:09:56.169 "get_zone_info": false, 00:09:56.169 "zone_management": false, 00:09:56.169 "zone_append": false, 00:09:56.169 "compare": false, 00:09:56.169 "compare_and_write": false, 00:09:56.169 "abort": false, 00:09:56.169 "seek_hole": false, 00:09:56.169 "seek_data": false, 00:09:56.169 "copy": false, 00:09:56.169 "nvme_iov_md": false 00:09:56.169 }, 00:09:56.169 "memory_domains": [ 00:09:56.169 { 00:09:56.169 "dma_device_id": "system", 00:09:56.169 "dma_device_type": 1 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.169 "dma_device_type": 2 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "dma_device_id": "system", 00:09:56.169 "dma_device_type": 1 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.169 "dma_device_type": 2 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "dma_device_id": "system", 00:09:56.169 "dma_device_type": 1 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.169 "dma_device_type": 2 00:09:56.169 } 00:09:56.169 ], 00:09:56.169 "driver_specific": { 00:09:56.169 "raid": { 00:09:56.169 "uuid": "a2e7e93e-5d71-4df9-80a7-c0ee4637697c", 00:09:56.169 "strip_size_kb": 0, 00:09:56.169 "state": "online", 00:09:56.169 "raid_level": "raid1", 00:09:56.169 "superblock": false, 00:09:56.169 "num_base_bdevs": 3, 00:09:56.169 "num_base_bdevs_discovered": 3, 00:09:56.169 "num_base_bdevs_operational": 3, 00:09:56.169 "base_bdevs_list": [ 00:09:56.169 { 00:09:56.169 "name": "BaseBdev1", 
00:09:56.169 "uuid": "6661928b-5213-48d6-bfbd-64289cb06aa1", 00:09:56.169 "is_configured": true, 00:09:56.169 "data_offset": 0, 00:09:56.169 "data_size": 65536 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "name": "BaseBdev2", 00:09:56.169 "uuid": "b0e55be3-832a-48fe-b57b-c1713663dd2f", 00:09:56.169 "is_configured": true, 00:09:56.169 "data_offset": 0, 00:09:56.169 "data_size": 65536 00:09:56.169 }, 00:09:56.169 { 00:09:56.169 "name": "BaseBdev3", 00:09:56.169 "uuid": "93ae6dde-ec2e-48bf-8ee3-d54ded2eac89", 00:09:56.169 "is_configured": true, 00:09:56.169 "data_offset": 0, 00:09:56.169 "data_size": 65536 00:09:56.169 } 00:09:56.169 ] 00:09:56.169 } 00:09:56.169 } 00:09:56.169 }' 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:56.169 BaseBdev2 00:09:56.169 BaseBdev3' 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.169 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.436 [2024-11-15 09:28:44.777852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.436 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.437 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.437 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.437 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.437 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.695 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.695 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.695 "name": "Existed_Raid", 00:09:56.695 "uuid": "a2e7e93e-5d71-4df9-80a7-c0ee4637697c", 00:09:56.695 "strip_size_kb": 0, 00:09:56.695 "state": "online", 00:09:56.695 "raid_level": "raid1", 00:09:56.695 "superblock": false, 00:09:56.695 "num_base_bdevs": 3, 00:09:56.695 "num_base_bdevs_discovered": 2, 00:09:56.695 "num_base_bdevs_operational": 2, 00:09:56.695 "base_bdevs_list": [ 00:09:56.695 { 00:09:56.695 "name": null, 00:09:56.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.695 "is_configured": false, 00:09:56.695 "data_offset": 0, 00:09:56.695 "data_size": 65536 00:09:56.695 }, 00:09:56.695 { 00:09:56.695 "name": "BaseBdev2", 00:09:56.695 "uuid": "b0e55be3-832a-48fe-b57b-c1713663dd2f", 00:09:56.695 "is_configured": true, 00:09:56.695 "data_offset": 0, 00:09:56.695 "data_size": 65536 00:09:56.695 }, 00:09:56.695 { 00:09:56.695 "name": "BaseBdev3", 00:09:56.695 "uuid": "93ae6dde-ec2e-48bf-8ee3-d54ded2eac89", 00:09:56.695 "is_configured": true, 
00:09:56.695 "data_offset": 0, 00:09:56.695 "data_size": 65536 00:09:56.695 } 00:09:56.695 ] 00:09:56.695 }' 00:09:56.695 09:28:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.695 09:28:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.953 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.953 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.953 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.953 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.953 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.953 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.953 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.212 [2024-11-15 09:28:45.437975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.212 09:28:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.212 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.212 [2024-11-15 09:28:45.605379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:57.212 [2024-11-15 09:28:45.605591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.471 [2024-11-15 09:28:45.712269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.471 [2024-11-15 09:28:45.712341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.471 [2024-11-15 09:28:45.712355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 
09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 BaseBdev2 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:57.471 09:28:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 [ 00:09:57.471 { 00:09:57.471 "name": "BaseBdev2", 00:09:57.471 "aliases": [ 00:09:57.471 "133bd2b6-7600-435e-bf75-5d205a592cc4" 00:09:57.471 ], 00:09:57.471 "product_name": "Malloc disk", 00:09:57.471 "block_size": 512, 00:09:57.471 "num_blocks": 65536, 00:09:57.471 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:09:57.471 "assigned_rate_limits": { 00:09:57.471 "rw_ios_per_sec": 0, 00:09:57.471 "rw_mbytes_per_sec": 0, 00:09:57.471 "r_mbytes_per_sec": 0, 00:09:57.471 "w_mbytes_per_sec": 0 00:09:57.471 }, 00:09:57.471 "claimed": false, 00:09:57.471 "zoned": false, 00:09:57.471 "supported_io_types": { 00:09:57.471 "read": true, 00:09:57.471 "write": true, 00:09:57.471 "unmap": true, 00:09:57.471 "flush": true, 00:09:57.471 "reset": true, 00:09:57.471 "nvme_admin": 
false, 00:09:57.471 "nvme_io": false, 00:09:57.471 "nvme_io_md": false, 00:09:57.471 "write_zeroes": true, 00:09:57.471 "zcopy": true, 00:09:57.471 "get_zone_info": false, 00:09:57.471 "zone_management": false, 00:09:57.471 "zone_append": false, 00:09:57.471 "compare": false, 00:09:57.471 "compare_and_write": false, 00:09:57.471 "abort": true, 00:09:57.471 "seek_hole": false, 00:09:57.471 "seek_data": false, 00:09:57.471 "copy": true, 00:09:57.471 "nvme_iov_md": false 00:09:57.471 }, 00:09:57.471 "memory_domains": [ 00:09:57.471 { 00:09:57.471 "dma_device_id": "system", 00:09:57.471 "dma_device_type": 1 00:09:57.471 }, 00:09:57.471 { 00:09:57.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.471 "dma_device_type": 2 00:09:57.471 } 00:09:57.471 ], 00:09:57.471 "driver_specific": {} 00:09:57.471 } 00:09:57.471 ] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 BaseBdev3 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:57.471 09:28:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.471 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.471 [ 00:09:57.471 { 00:09:57.471 "name": "BaseBdev3", 00:09:57.471 "aliases": [ 00:09:57.471 "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301" 00:09:57.471 ], 00:09:57.471 "product_name": "Malloc disk", 00:09:57.471 "block_size": 512, 00:09:57.471 "num_blocks": 65536, 00:09:57.471 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:09:57.471 "assigned_rate_limits": { 00:09:57.471 "rw_ios_per_sec": 0, 00:09:57.471 "rw_mbytes_per_sec": 0, 00:09:57.471 "r_mbytes_per_sec": 0, 00:09:57.471 "w_mbytes_per_sec": 0 00:09:57.471 }, 00:09:57.471 "claimed": false, 00:09:57.471 "zoned": false, 00:09:57.471 "supported_io_types": { 00:09:57.471 "read": true, 00:09:57.471 "write": true, 00:09:57.471 "unmap": true, 00:09:57.471 "flush": true, 00:09:57.471 "reset": true, 00:09:57.471 "nvme_admin": 
false, 00:09:57.471 "nvme_io": false, 00:09:57.471 "nvme_io_md": false, 00:09:57.471 "write_zeroes": true, 00:09:57.471 "zcopy": true, 00:09:57.471 "get_zone_info": false, 00:09:57.471 "zone_management": false, 00:09:57.471 "zone_append": false, 00:09:57.471 "compare": false, 00:09:57.471 "compare_and_write": false, 00:09:57.471 "abort": true, 00:09:57.471 "seek_hole": false, 00:09:57.471 "seek_data": false, 00:09:57.471 "copy": true, 00:09:57.471 "nvme_iov_md": false 00:09:57.471 }, 00:09:57.471 "memory_domains": [ 00:09:57.472 { 00:09:57.472 "dma_device_id": "system", 00:09:57.472 "dma_device_type": 1 00:09:57.472 }, 00:09:57.472 { 00:09:57.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.472 "dma_device_type": 2 00:09:57.472 } 00:09:57.472 ], 00:09:57.472 "driver_specific": {} 00:09:57.472 } 00:09:57.472 ] 00:09:57.472 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.472 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:57.472 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.472 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.472 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.472 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.472 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 [2024-11-15 09:28:45.938529] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.731 [2024-11-15 09:28:45.938687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.731 [2024-11-15 09:28:45.938736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:09:57.731 [2024-11-15 09:28:45.940747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.731 
09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.731 "name": "Existed_Raid", 00:09:57.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.731 "strip_size_kb": 0, 00:09:57.731 "state": "configuring", 00:09:57.731 "raid_level": "raid1", 00:09:57.731 "superblock": false, 00:09:57.731 "num_base_bdevs": 3, 00:09:57.731 "num_base_bdevs_discovered": 2, 00:09:57.731 "num_base_bdevs_operational": 3, 00:09:57.731 "base_bdevs_list": [ 00:09:57.731 { 00:09:57.731 "name": "BaseBdev1", 00:09:57.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.731 "is_configured": false, 00:09:57.731 "data_offset": 0, 00:09:57.731 "data_size": 0 00:09:57.731 }, 00:09:57.731 { 00:09:57.731 "name": "BaseBdev2", 00:09:57.731 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:09:57.731 "is_configured": true, 00:09:57.731 "data_offset": 0, 00:09:57.731 "data_size": 65536 00:09:57.731 }, 00:09:57.731 { 00:09:57.731 "name": "BaseBdev3", 00:09:57.731 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:09:57.731 "is_configured": true, 00:09:57.731 "data_offset": 0, 00:09:57.731 "data_size": 65536 00:09:57.731 } 00:09:57.731 ] 00:09:57.731 }' 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.731 09:28:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 [2024-11-15 09:28:46.377845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.991 09:28:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.991 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.991 "name": "Existed_Raid", 00:09:57.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.991 "strip_size_kb": 0, 00:09:57.991 "state": "configuring", 00:09:57.991 
"raid_level": "raid1", 00:09:57.991 "superblock": false, 00:09:57.991 "num_base_bdevs": 3, 00:09:57.991 "num_base_bdevs_discovered": 1, 00:09:57.991 "num_base_bdevs_operational": 3, 00:09:57.991 "base_bdevs_list": [ 00:09:57.991 { 00:09:57.991 "name": "BaseBdev1", 00:09:57.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.991 "is_configured": false, 00:09:57.991 "data_offset": 0, 00:09:57.991 "data_size": 0 00:09:57.992 }, 00:09:57.992 { 00:09:57.992 "name": null, 00:09:57.992 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:09:57.992 "is_configured": false, 00:09:57.992 "data_offset": 0, 00:09:57.992 "data_size": 65536 00:09:57.992 }, 00:09:57.992 { 00:09:57.992 "name": "BaseBdev3", 00:09:57.992 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:09:57.992 "is_configured": true, 00:09:57.992 "data_offset": 0, 00:09:57.992 "data_size": 65536 00:09:57.992 } 00:09:57.992 ] 00:09:57.992 }' 00:09:57.992 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.992 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.561 [2024-11-15 09:28:46.878014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.561 BaseBdev1 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.561 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.561 [ 00:09:58.561 { 00:09:58.561 "name": "BaseBdev1", 00:09:58.561 "aliases": [ 00:09:58.561 
"9f49c96f-64fe-408b-bff2-fe335a777d32" 00:09:58.561 ], 00:09:58.561 "product_name": "Malloc disk", 00:09:58.561 "block_size": 512, 00:09:58.561 "num_blocks": 65536, 00:09:58.561 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:09:58.561 "assigned_rate_limits": { 00:09:58.561 "rw_ios_per_sec": 0, 00:09:58.561 "rw_mbytes_per_sec": 0, 00:09:58.561 "r_mbytes_per_sec": 0, 00:09:58.561 "w_mbytes_per_sec": 0 00:09:58.561 }, 00:09:58.561 "claimed": true, 00:09:58.561 "claim_type": "exclusive_write", 00:09:58.561 "zoned": false, 00:09:58.562 "supported_io_types": { 00:09:58.562 "read": true, 00:09:58.562 "write": true, 00:09:58.562 "unmap": true, 00:09:58.562 "flush": true, 00:09:58.562 "reset": true, 00:09:58.562 "nvme_admin": false, 00:09:58.562 "nvme_io": false, 00:09:58.562 "nvme_io_md": false, 00:09:58.562 "write_zeroes": true, 00:09:58.562 "zcopy": true, 00:09:58.562 "get_zone_info": false, 00:09:58.562 "zone_management": false, 00:09:58.562 "zone_append": false, 00:09:58.562 "compare": false, 00:09:58.562 "compare_and_write": false, 00:09:58.562 "abort": true, 00:09:58.562 "seek_hole": false, 00:09:58.562 "seek_data": false, 00:09:58.562 "copy": true, 00:09:58.562 "nvme_iov_md": false 00:09:58.562 }, 00:09:58.562 "memory_domains": [ 00:09:58.562 { 00:09:58.562 "dma_device_id": "system", 00:09:58.562 "dma_device_type": 1 00:09:58.562 }, 00:09:58.562 { 00:09:58.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.562 "dma_device_type": 2 00:09:58.562 } 00:09:58.562 ], 00:09:58.562 "driver_specific": {} 00:09:58.562 } 00:09:58.562 ] 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- 
# local raid_bdev_name=Existed_Raid 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.562 "name": "Existed_Raid", 00:09:58.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.562 "strip_size_kb": 0, 00:09:58.562 "state": "configuring", 00:09:58.562 "raid_level": "raid1", 00:09:58.562 "superblock": false, 00:09:58.562 "num_base_bdevs": 3, 00:09:58.562 "num_base_bdevs_discovered": 2, 00:09:58.562 "num_base_bdevs_operational": 3, 00:09:58.562 "base_bdevs_list": [ 
00:09:58.562 { 00:09:58.562 "name": "BaseBdev1", 00:09:58.562 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:09:58.562 "is_configured": true, 00:09:58.562 "data_offset": 0, 00:09:58.562 "data_size": 65536 00:09:58.562 }, 00:09:58.562 { 00:09:58.562 "name": null, 00:09:58.562 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:09:58.562 "is_configured": false, 00:09:58.562 "data_offset": 0, 00:09:58.562 "data_size": 65536 00:09:58.562 }, 00:09:58.562 { 00:09:58.562 "name": "BaseBdev3", 00:09:58.562 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:09:58.562 "is_configured": true, 00:09:58.562 "data_offset": 0, 00:09:58.562 "data_size": 65536 00:09:58.562 } 00:09:58.562 ] 00:09:58.562 }' 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.562 09:28:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.130 [2024-11-15 09:28:47.449169] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.130 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.131 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.131 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.131 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.131 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.131 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:59.131 "name": "Existed_Raid", 00:09:59.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.131 "strip_size_kb": 0, 00:09:59.131 "state": "configuring", 00:09:59.131 "raid_level": "raid1", 00:09:59.131 "superblock": false, 00:09:59.131 "num_base_bdevs": 3, 00:09:59.131 "num_base_bdevs_discovered": 1, 00:09:59.131 "num_base_bdevs_operational": 3, 00:09:59.131 "base_bdevs_list": [ 00:09:59.131 { 00:09:59.131 "name": "BaseBdev1", 00:09:59.131 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:09:59.131 "is_configured": true, 00:09:59.131 "data_offset": 0, 00:09:59.131 "data_size": 65536 00:09:59.131 }, 00:09:59.131 { 00:09:59.131 "name": null, 00:09:59.131 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:09:59.131 "is_configured": false, 00:09:59.131 "data_offset": 0, 00:09:59.131 "data_size": 65536 00:09:59.131 }, 00:09:59.131 { 00:09:59.131 "name": null, 00:09:59.131 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:09:59.131 "is_configured": false, 00:09:59.131 "data_offset": 0, 00:09:59.131 "data_size": 65536 00:09:59.131 } 00:09:59.131 ] 00:09:59.131 }' 00:09:59.131 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.131 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 
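The `verify_raid_bdev_state` helper invoked throughout this trace compares fields of the `bdev_raid_get_bdevs` JSON (captured into `raid_bdev_info` above) against the expected name, state, raid level, and base-bdev counts. A minimal Python sketch of that comparison, using only field names visible in the dumps above (this is an illustration of the check, not the shell helper itself):

```python
import json

def verify_raid_bdev_state(raid_info: dict, expected_state: str,
                           raid_level: str, num_base_bdevs: int) -> bool:
    """Check a single raid bdev's info dict the way the trace does:
    state, raid level, declared member count, and that the discovered
    count matches the number of members with is_configured == true."""
    discovered = sum(1 for b in raid_info["base_bdevs_list"]
                     if b["is_configured"])
    return (raid_info["state"] == expected_state
            and raid_info["raid_level"] == raid_level
            and raid_info["num_base_bdevs"] == num_base_bdevs
            and raid_info["num_base_bdevs_discovered"] == discovered)
```

With the dump above (one configured member, two removed), `verify_raid_bdev_state(info, "configuring", "raid1", 3)` holds: the array stays in `configuring` until every declared base bdev is present again.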
00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.698 [2024-11-15 09:28:47.972356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.698 09:28:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.698 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.698 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.698 "name": "Existed_Raid", 00:09:59.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.698 "strip_size_kb": 0, 00:09:59.698 "state": "configuring", 00:09:59.698 "raid_level": "raid1", 00:09:59.698 "superblock": false, 00:09:59.698 "num_base_bdevs": 3, 00:09:59.699 "num_base_bdevs_discovered": 2, 00:09:59.699 "num_base_bdevs_operational": 3, 00:09:59.699 "base_bdevs_list": [ 00:09:59.699 { 00:09:59.699 "name": "BaseBdev1", 00:09:59.699 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:09:59.699 "is_configured": true, 00:09:59.699 "data_offset": 0, 00:09:59.699 "data_size": 65536 00:09:59.699 }, 00:09:59.699 { 00:09:59.699 "name": null, 00:09:59.699 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:09:59.699 "is_configured": false, 00:09:59.699 "data_offset": 0, 00:09:59.699 "data_size": 65536 00:09:59.699 }, 00:09:59.699 { 00:09:59.699 "name": "BaseBdev3", 00:09:59.699 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:09:59.699 "is_configured": true, 00:09:59.699 "data_offset": 0, 00:09:59.699 "data_size": 65536 00:09:59.699 } 00:09:59.699 ] 00:09:59.699 }' 00:09:59.699 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.699 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.264 09:28:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.264 [2024-11-15 09:28:48.531412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.264 "name": "Existed_Raid", 00:10:00.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.264 "strip_size_kb": 0, 00:10:00.264 "state": "configuring", 00:10:00.264 "raid_level": "raid1", 00:10:00.264 "superblock": false, 00:10:00.264 "num_base_bdevs": 3, 00:10:00.264 "num_base_bdevs_discovered": 1, 00:10:00.264 "num_base_bdevs_operational": 3, 00:10:00.264 "base_bdevs_list": [ 00:10:00.264 { 00:10:00.264 "name": null, 00:10:00.264 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:10:00.264 "is_configured": false, 00:10:00.264 "data_offset": 0, 00:10:00.264 "data_size": 65536 00:10:00.264 }, 00:10:00.264 { 00:10:00.264 "name": null, 00:10:00.264 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:10:00.264 "is_configured": false, 00:10:00.264 "data_offset": 0, 00:10:00.264 "data_size": 65536 00:10:00.264 }, 00:10:00.264 { 00:10:00.264 "name": "BaseBdev3", 00:10:00.264 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:10:00.264 "is_configured": true, 00:10:00.264 "data_offset": 0, 00:10:00.264 "data_size": 65536 00:10:00.264 } 00:10:00.264 ] 00:10:00.264 }' 00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:00.264 09:28:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.830 [2024-11-15 09:28:49.123198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.830 09:28:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.830 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.830 "name": "Existed_Raid", 00:10:00.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.830 "strip_size_kb": 0, 00:10:00.830 "state": "configuring", 00:10:00.830 "raid_level": "raid1", 00:10:00.830 "superblock": false, 00:10:00.830 "num_base_bdevs": 3, 00:10:00.830 "num_base_bdevs_discovered": 2, 00:10:00.830 "num_base_bdevs_operational": 3, 00:10:00.830 "base_bdevs_list": [ 00:10:00.830 { 00:10:00.830 "name": null, 00:10:00.830 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:10:00.830 "is_configured": false, 00:10:00.830 "data_offset": 0, 00:10:00.830 "data_size": 65536 00:10:00.830 }, 00:10:00.830 { 00:10:00.830 "name": "BaseBdev2", 00:10:00.830 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:10:00.830 "is_configured": true, 00:10:00.830 "data_offset": 
0, 00:10:00.830 "data_size": 65536 00:10:00.830 }, 00:10:00.830 { 00:10:00.830 "name": "BaseBdev3", 00:10:00.830 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:10:00.830 "is_configured": true, 00:10:00.830 "data_offset": 0, 00:10:00.831 "data_size": 65536 00:10:00.831 } 00:10:00.831 ] 00:10:00.831 }' 00:10:00.831 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.831 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9f49c96f-64fe-408b-bff2-fe335a777d32 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.398 [2024-11-15 09:28:49.745727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:01.398 [2024-11-15 09:28:49.745786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:01.398 [2024-11-15 09:28:49.745794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:01.398 [2024-11-15 09:28:49.746074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:01.398 [2024-11-15 09:28:49.746234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:01.398 [2024-11-15 09:28:49.746249] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:01.398 [2024-11-15 09:28:49.746496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.398 NewBaseBdev 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:01.398 
09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.398 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.398 [ 00:10:01.398 { 00:10:01.398 "name": "NewBaseBdev", 00:10:01.398 "aliases": [ 00:10:01.398 "9f49c96f-64fe-408b-bff2-fe335a777d32" 00:10:01.398 ], 00:10:01.398 "product_name": "Malloc disk", 00:10:01.398 "block_size": 512, 00:10:01.398 "num_blocks": 65536, 00:10:01.398 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:10:01.398 "assigned_rate_limits": { 00:10:01.398 "rw_ios_per_sec": 0, 00:10:01.398 "rw_mbytes_per_sec": 0, 00:10:01.398 "r_mbytes_per_sec": 0, 00:10:01.398 "w_mbytes_per_sec": 0 00:10:01.398 }, 00:10:01.398 "claimed": true, 00:10:01.398 "claim_type": "exclusive_write", 00:10:01.398 "zoned": false, 00:10:01.398 "supported_io_types": { 00:10:01.399 "read": true, 00:10:01.399 "write": true, 00:10:01.399 "unmap": true, 00:10:01.399 "flush": true, 00:10:01.399 "reset": true, 00:10:01.399 "nvme_admin": false, 00:10:01.399 "nvme_io": false, 00:10:01.399 "nvme_io_md": false, 00:10:01.399 "write_zeroes": true, 00:10:01.399 "zcopy": true, 00:10:01.399 "get_zone_info": false, 00:10:01.399 "zone_management": false, 00:10:01.399 "zone_append": false, 00:10:01.399 "compare": false, 00:10:01.399 "compare_and_write": false, 00:10:01.399 "abort": true, 00:10:01.399 "seek_hole": false, 00:10:01.399 "seek_data": false, 00:10:01.399 "copy": true, 00:10:01.399 "nvme_iov_md": false 00:10:01.399 }, 00:10:01.399 
"memory_domains": [ 00:10:01.399 { 00:10:01.399 "dma_device_id": "system", 00:10:01.399 "dma_device_type": 1 00:10:01.399 }, 00:10:01.399 { 00:10:01.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.399 "dma_device_type": 2 00:10:01.399 } 00:10:01.399 ], 00:10:01.399 "driver_specific": {} 00:10:01.399 } 00:10:01.399 ] 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.399 "name": "Existed_Raid", 00:10:01.399 "uuid": "d8bd3a15-bd3f-4fc7-8bbc-f37eafc202ad", 00:10:01.399 "strip_size_kb": 0, 00:10:01.399 "state": "online", 00:10:01.399 "raid_level": "raid1", 00:10:01.399 "superblock": false, 00:10:01.399 "num_base_bdevs": 3, 00:10:01.399 "num_base_bdevs_discovered": 3, 00:10:01.399 "num_base_bdevs_operational": 3, 00:10:01.399 "base_bdevs_list": [ 00:10:01.399 { 00:10:01.399 "name": "NewBaseBdev", 00:10:01.399 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:10:01.399 "is_configured": true, 00:10:01.399 "data_offset": 0, 00:10:01.399 "data_size": 65536 00:10:01.399 }, 00:10:01.399 { 00:10:01.399 "name": "BaseBdev2", 00:10:01.399 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:10:01.399 "is_configured": true, 00:10:01.399 "data_offset": 0, 00:10:01.399 "data_size": 65536 00:10:01.399 }, 00:10:01.399 { 00:10:01.399 "name": "BaseBdev3", 00:10:01.399 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:10:01.399 "is_configured": true, 00:10:01.399 "data_offset": 0, 00:10:01.399 "data_size": 65536 00:10:01.399 } 00:10:01.399 ] 00:10:01.399 }' 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.399 09:28:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.967 [2024-11-15 09:28:50.277287] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.967 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.967 "name": "Existed_Raid", 00:10:01.967 "aliases": [ 00:10:01.967 "d8bd3a15-bd3f-4fc7-8bbc-f37eafc202ad" 00:10:01.967 ], 00:10:01.967 "product_name": "Raid Volume", 00:10:01.967 "block_size": 512, 00:10:01.967 "num_blocks": 65536, 00:10:01.967 "uuid": "d8bd3a15-bd3f-4fc7-8bbc-f37eafc202ad", 00:10:01.967 "assigned_rate_limits": { 00:10:01.967 "rw_ios_per_sec": 0, 00:10:01.967 "rw_mbytes_per_sec": 0, 00:10:01.967 "r_mbytes_per_sec": 0, 00:10:01.967 "w_mbytes_per_sec": 0 00:10:01.967 }, 00:10:01.967 "claimed": false, 00:10:01.967 "zoned": false, 00:10:01.967 "supported_io_types": { 00:10:01.967 "read": true, 00:10:01.967 "write": true, 00:10:01.967 "unmap": false, 00:10:01.967 "flush": false, 00:10:01.967 "reset": true, 00:10:01.967 "nvme_admin": false, 00:10:01.967 "nvme_io": false, 00:10:01.967 "nvme_io_md": false, 00:10:01.967 "write_zeroes": true, 
00:10:01.967 "zcopy": false, 00:10:01.967 "get_zone_info": false, 00:10:01.967 "zone_management": false, 00:10:01.967 "zone_append": false, 00:10:01.967 "compare": false, 00:10:01.967 "compare_and_write": false, 00:10:01.967 "abort": false, 00:10:01.967 "seek_hole": false, 00:10:01.967 "seek_data": false, 00:10:01.967 "copy": false, 00:10:01.967 "nvme_iov_md": false 00:10:01.967 }, 00:10:01.967 "memory_domains": [ 00:10:01.967 { 00:10:01.967 "dma_device_id": "system", 00:10:01.967 "dma_device_type": 1 00:10:01.967 }, 00:10:01.967 { 00:10:01.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.967 "dma_device_type": 2 00:10:01.967 }, 00:10:01.967 { 00:10:01.967 "dma_device_id": "system", 00:10:01.967 "dma_device_type": 1 00:10:01.967 }, 00:10:01.967 { 00:10:01.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.967 "dma_device_type": 2 00:10:01.967 }, 00:10:01.967 { 00:10:01.967 "dma_device_id": "system", 00:10:01.968 "dma_device_type": 1 00:10:01.968 }, 00:10:01.968 { 00:10:01.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.968 "dma_device_type": 2 00:10:01.968 } 00:10:01.968 ], 00:10:01.968 "driver_specific": { 00:10:01.968 "raid": { 00:10:01.968 "uuid": "d8bd3a15-bd3f-4fc7-8bbc-f37eafc202ad", 00:10:01.968 "strip_size_kb": 0, 00:10:01.968 "state": "online", 00:10:01.968 "raid_level": "raid1", 00:10:01.968 "superblock": false, 00:10:01.968 "num_base_bdevs": 3, 00:10:01.968 "num_base_bdevs_discovered": 3, 00:10:01.968 "num_base_bdevs_operational": 3, 00:10:01.968 "base_bdevs_list": [ 00:10:01.968 { 00:10:01.968 "name": "NewBaseBdev", 00:10:01.968 "uuid": "9f49c96f-64fe-408b-bff2-fe335a777d32", 00:10:01.968 "is_configured": true, 00:10:01.968 "data_offset": 0, 00:10:01.968 "data_size": 65536 00:10:01.968 }, 00:10:01.968 { 00:10:01.968 "name": "BaseBdev2", 00:10:01.968 "uuid": "133bd2b6-7600-435e-bf75-5d205a592cc4", 00:10:01.968 "is_configured": true, 00:10:01.968 "data_offset": 0, 00:10:01.968 "data_size": 65536 00:10:01.968 }, 00:10:01.968 { 00:10:01.968 
"name": "BaseBdev3", 00:10:01.968 "uuid": "fb9e4b4a-1e68-4f5d-86c0-c15fde1cb301", 00:10:01.968 "is_configured": true, 00:10:01.968 "data_offset": 0, 00:10:01.968 "data_size": 65536 00:10:01.968 } 00:10:01.968 ] 00:10:01.968 } 00:10:01.968 } 00:10:01.968 }' 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:01.968 BaseBdev2 00:10:01.968 BaseBdev3' 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.968 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.229 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:02.230 [2024-11-15 09:28:50.580415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.230 [2024-11-15 09:28:50.580456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.230 [2024-11-15 09:28:50.580550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.230 [2024-11-15 09:28:50.580869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.230 [2024-11-15 09:28:50.580888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67723 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67723 ']' 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67723 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67723 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:02.230 killing process with pid 67723 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67723' 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 67723 00:10:02.230 [2024-11-15 09:28:50.629194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.230 09:28:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67723 00:10:02.809 [2024-11-15 09:28:50.986586] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.183 00:10:04.183 real 0m11.373s 00:10:04.183 user 0m17.848s 00:10:04.183 sys 0m2.068s 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.183 ************************************ 00:10:04.183 END TEST raid_state_function_test 00:10:04.183 ************************************ 00:10:04.183 09:28:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:04.183 09:28:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:04.183 09:28:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.183 09:28:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.183 ************************************ 00:10:04.183 START TEST raid_state_function_test_sb 00:10:04.183 ************************************ 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68355 00:10:04.183 Process raid pid: 68355 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68355' 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68355 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68355 ']' 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.183 09:28:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.183 [2024-11-15 09:28:52.486335] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:10:04.183 [2024-11-15 09:28:52.486524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.443 [2024-11-15 09:28:52.679889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.443 [2024-11-15 09:28:52.804256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.702 [2024-11-15 09:28:53.035602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.702 [2024-11-15 09:28:53.035642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.961 [2024-11-15 09:28:53.409524] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.961 [2024-11-15 09:28:53.409586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.961 [2024-11-15 09:28:53.409596] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.961 [2024-11-15 09:28:53.409606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.961 [2024-11-15 09:28:53.409613] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:04.961 [2024-11-15 09:28:53.409622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.961 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.221 09:28:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.221 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.221 "name": "Existed_Raid", 00:10:05.221 "uuid": "92781d67-ad8f-4acd-93d8-0b1f1ed897a8", 00:10:05.221 "strip_size_kb": 0, 00:10:05.221 "state": "configuring", 00:10:05.221 "raid_level": "raid1", 00:10:05.221 "superblock": true, 00:10:05.221 "num_base_bdevs": 3, 00:10:05.221 "num_base_bdevs_discovered": 0, 00:10:05.221 "num_base_bdevs_operational": 3, 00:10:05.221 "base_bdevs_list": [ 00:10:05.221 { 00:10:05.221 "name": "BaseBdev1", 00:10:05.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.221 "is_configured": false, 00:10:05.221 "data_offset": 0, 00:10:05.221 "data_size": 0 00:10:05.221 }, 00:10:05.221 { 00:10:05.221 "name": "BaseBdev2", 00:10:05.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.221 "is_configured": false, 00:10:05.221 "data_offset": 0, 00:10:05.221 "data_size": 0 00:10:05.221 }, 00:10:05.221 { 00:10:05.221 "name": "BaseBdev3", 00:10:05.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.221 "is_configured": false, 00:10:05.221 "data_offset": 0, 00:10:05.221 "data_size": 0 00:10:05.221 } 00:10:05.221 ] 00:10:05.221 }' 00:10:05.221 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.221 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.480 [2024-11-15 09:28:53.892680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.480 [2024-11-15 09:28:53.892739] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.480 [2024-11-15 09:28:53.904645] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.480 [2024-11-15 09:28:53.904702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.480 [2024-11-15 09:28:53.904713] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.480 [2024-11-15 09:28:53.904724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.480 [2024-11-15 09:28:53.904731] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.480 [2024-11-15 09:28:53.904741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.480 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.740 [2024-11-15 09:28:53.955757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.740 BaseBdev1 
00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.740 [ 00:10:05.740 { 00:10:05.740 "name": "BaseBdev1", 00:10:05.740 "aliases": [ 00:10:05.740 "767722b1-5e92-4f6a-83c5-30ab06bb5b33" 00:10:05.740 ], 00:10:05.740 "product_name": "Malloc disk", 00:10:05.740 "block_size": 512, 00:10:05.740 "num_blocks": 65536, 00:10:05.740 "uuid": "767722b1-5e92-4f6a-83c5-30ab06bb5b33", 00:10:05.740 "assigned_rate_limits": { 00:10:05.740 
"rw_ios_per_sec": 0, 00:10:05.740 "rw_mbytes_per_sec": 0, 00:10:05.740 "r_mbytes_per_sec": 0, 00:10:05.740 "w_mbytes_per_sec": 0 00:10:05.740 }, 00:10:05.740 "claimed": true, 00:10:05.740 "claim_type": "exclusive_write", 00:10:05.740 "zoned": false, 00:10:05.740 "supported_io_types": { 00:10:05.740 "read": true, 00:10:05.740 "write": true, 00:10:05.740 "unmap": true, 00:10:05.740 "flush": true, 00:10:05.740 "reset": true, 00:10:05.740 "nvme_admin": false, 00:10:05.740 "nvme_io": false, 00:10:05.740 "nvme_io_md": false, 00:10:05.740 "write_zeroes": true, 00:10:05.740 "zcopy": true, 00:10:05.740 "get_zone_info": false, 00:10:05.740 "zone_management": false, 00:10:05.740 "zone_append": false, 00:10:05.740 "compare": false, 00:10:05.740 "compare_and_write": false, 00:10:05.740 "abort": true, 00:10:05.740 "seek_hole": false, 00:10:05.740 "seek_data": false, 00:10:05.740 "copy": true, 00:10:05.740 "nvme_iov_md": false 00:10:05.740 }, 00:10:05.740 "memory_domains": [ 00:10:05.740 { 00:10:05.740 "dma_device_id": "system", 00:10:05.740 "dma_device_type": 1 00:10:05.740 }, 00:10:05.740 { 00:10:05.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.740 "dma_device_type": 2 00:10:05.740 } 00:10:05.740 ], 00:10:05.740 "driver_specific": {} 00:10:05.740 } 00:10:05.740 ] 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.740 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.741 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.741 09:28:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.741 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.741 09:28:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.741 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.741 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.741 "name": "Existed_Raid", 00:10:05.741 "uuid": "af7f4329-4248-4ac0-9165-21956392326e", 00:10:05.741 "strip_size_kb": 0, 00:10:05.741 "state": "configuring", 00:10:05.741 "raid_level": "raid1", 00:10:05.741 "superblock": true, 00:10:05.741 "num_base_bdevs": 3, 00:10:05.741 "num_base_bdevs_discovered": 1, 00:10:05.741 "num_base_bdevs_operational": 3, 00:10:05.741 "base_bdevs_list": [ 00:10:05.741 { 00:10:05.741 "name": "BaseBdev1", 00:10:05.741 "uuid": "767722b1-5e92-4f6a-83c5-30ab06bb5b33", 00:10:05.741 "is_configured": true, 00:10:05.741 "data_offset": 2048, 00:10:05.741 "data_size": 63488 
00:10:05.741 }, 00:10:05.741 { 00:10:05.741 "name": "BaseBdev2", 00:10:05.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.741 "is_configured": false, 00:10:05.741 "data_offset": 0, 00:10:05.741 "data_size": 0 00:10:05.741 }, 00:10:05.741 { 00:10:05.741 "name": "BaseBdev3", 00:10:05.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.741 "is_configured": false, 00:10:05.741 "data_offset": 0, 00:10:05.741 "data_size": 0 00:10:05.741 } 00:10:05.741 ] 00:10:05.741 }' 00:10:05.741 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.741 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.000 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.000 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.000 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.260 [2024-11-15 09:28:54.470978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.260 [2024-11-15 09:28:54.471055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.260 [2024-11-15 09:28:54.482987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.260 [2024-11-15 09:28:54.484978] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.260 [2024-11-15 09:28:54.485022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.260 [2024-11-15 09:28:54.485033] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.260 [2024-11-15 09:28:54.485044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.260 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.261 "name": "Existed_Raid", 00:10:06.261 "uuid": "2ec1859e-7289-41aa-ae5b-1e3b420391db", 00:10:06.261 "strip_size_kb": 0, 00:10:06.261 "state": "configuring", 00:10:06.261 "raid_level": "raid1", 00:10:06.261 "superblock": true, 00:10:06.261 "num_base_bdevs": 3, 00:10:06.261 "num_base_bdevs_discovered": 1, 00:10:06.261 "num_base_bdevs_operational": 3, 00:10:06.261 "base_bdevs_list": [ 00:10:06.261 { 00:10:06.261 "name": "BaseBdev1", 00:10:06.261 "uuid": "767722b1-5e92-4f6a-83c5-30ab06bb5b33", 00:10:06.261 "is_configured": true, 00:10:06.261 "data_offset": 2048, 00:10:06.261 "data_size": 63488 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "name": "BaseBdev2", 00:10:06.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.261 "is_configured": false, 00:10:06.261 "data_offset": 0, 00:10:06.261 "data_size": 0 00:10:06.261 }, 00:10:06.261 { 00:10:06.261 "name": "BaseBdev3", 00:10:06.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.261 "is_configured": false, 00:10:06.261 "data_offset": 0, 00:10:06.261 "data_size": 0 00:10:06.261 } 00:10:06.261 ] 00:10:06.261 }' 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.261 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:06.520 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.520 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.520 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.779 [2024-11-15 09:28:54.989551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.779 BaseBdev2 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.779 09:28:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.779 [ 00:10:06.779 { 00:10:06.779 "name": "BaseBdev2", 00:10:06.779 "aliases": [ 00:10:06.779 "aae58a69-cd65-4f9c-8709-9b5b58e33add" 00:10:06.779 ], 00:10:06.779 "product_name": "Malloc disk", 00:10:06.779 "block_size": 512, 00:10:06.779 "num_blocks": 65536, 00:10:06.779 "uuid": "aae58a69-cd65-4f9c-8709-9b5b58e33add", 00:10:06.779 "assigned_rate_limits": { 00:10:06.779 "rw_ios_per_sec": 0, 00:10:06.779 "rw_mbytes_per_sec": 0, 00:10:06.779 "r_mbytes_per_sec": 0, 00:10:06.779 "w_mbytes_per_sec": 0 00:10:06.779 }, 00:10:06.779 "claimed": true, 00:10:06.779 "claim_type": "exclusive_write", 00:10:06.779 "zoned": false, 00:10:06.779 "supported_io_types": { 00:10:06.779 "read": true, 00:10:06.779 "write": true, 00:10:06.779 "unmap": true, 00:10:06.779 "flush": true, 00:10:06.779 "reset": true, 00:10:06.779 "nvme_admin": false, 00:10:06.779 "nvme_io": false, 00:10:06.779 "nvme_io_md": false, 00:10:06.779 "write_zeroes": true, 00:10:06.779 "zcopy": true, 00:10:06.779 "get_zone_info": false, 00:10:06.779 "zone_management": false, 00:10:06.779 "zone_append": false, 00:10:06.779 "compare": false, 00:10:06.779 "compare_and_write": false, 00:10:06.779 "abort": true, 00:10:06.779 "seek_hole": false, 00:10:06.779 "seek_data": false, 00:10:06.779 "copy": true, 00:10:06.779 "nvme_iov_md": false 00:10:06.779 }, 00:10:06.779 "memory_domains": [ 00:10:06.779 { 00:10:06.779 "dma_device_id": "system", 00:10:06.779 "dma_device_type": 1 00:10:06.779 }, 00:10:06.779 { 00:10:06.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.779 "dma_device_type": 2 00:10:06.779 } 00:10:06.779 ], 00:10:06.779 "driver_specific": {} 00:10:06.779 } 00:10:06.779 ] 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.779 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.780 
09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.780 "name": "Existed_Raid", 00:10:06.780 "uuid": "2ec1859e-7289-41aa-ae5b-1e3b420391db", 00:10:06.780 "strip_size_kb": 0, 00:10:06.780 "state": "configuring", 00:10:06.780 "raid_level": "raid1", 00:10:06.780 "superblock": true, 00:10:06.780 "num_base_bdevs": 3, 00:10:06.780 "num_base_bdevs_discovered": 2, 00:10:06.780 "num_base_bdevs_operational": 3, 00:10:06.780 "base_bdevs_list": [ 00:10:06.780 { 00:10:06.780 "name": "BaseBdev1", 00:10:06.780 "uuid": "767722b1-5e92-4f6a-83c5-30ab06bb5b33", 00:10:06.780 "is_configured": true, 00:10:06.780 "data_offset": 2048, 00:10:06.780 "data_size": 63488 00:10:06.780 }, 00:10:06.780 { 00:10:06.780 "name": "BaseBdev2", 00:10:06.780 "uuid": "aae58a69-cd65-4f9c-8709-9b5b58e33add", 00:10:06.780 "is_configured": true, 00:10:06.780 "data_offset": 2048, 00:10:06.780 "data_size": 63488 00:10:06.780 }, 00:10:06.780 { 00:10:06.780 "name": "BaseBdev3", 00:10:06.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.780 "is_configured": false, 00:10:06.780 "data_offset": 0, 00:10:06.780 "data_size": 0 00:10:06.780 } 00:10:06.780 ] 00:10:06.780 }' 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.780 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.039 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.039 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.039 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.298 [2024-11-15 09:28:55.515146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.298 [2024-11-15 09:28:55.515450] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:07.298 [2024-11-15 09:28:55.515484] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:07.298 [2024-11-15 09:28:55.515786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:07.298 [2024-11-15 09:28:55.515989] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:07.298 [2024-11-15 09:28:55.516017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:07.298 BaseBdev3 00:10:07.298 [2024-11-15 09:28:55.516183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.298 09:28:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.298 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.298 [ 00:10:07.298 { 00:10:07.298 "name": "BaseBdev3", 00:10:07.298 "aliases": [ 00:10:07.298 "6fa8f727-f272-4339-bc90-72e63073d0a2" 00:10:07.298 ], 00:10:07.298 "product_name": "Malloc disk", 00:10:07.298 "block_size": 512, 00:10:07.298 "num_blocks": 65536, 00:10:07.298 "uuid": "6fa8f727-f272-4339-bc90-72e63073d0a2", 00:10:07.298 "assigned_rate_limits": { 00:10:07.298 "rw_ios_per_sec": 0, 00:10:07.298 "rw_mbytes_per_sec": 0, 00:10:07.298 "r_mbytes_per_sec": 0, 00:10:07.298 "w_mbytes_per_sec": 0 00:10:07.298 }, 00:10:07.298 "claimed": true, 00:10:07.298 "claim_type": "exclusive_write", 00:10:07.298 "zoned": false, 00:10:07.298 "supported_io_types": { 00:10:07.298 "read": true, 00:10:07.298 "write": true, 00:10:07.298 "unmap": true, 00:10:07.298 "flush": true, 00:10:07.298 "reset": true, 00:10:07.299 "nvme_admin": false, 00:10:07.299 "nvme_io": false, 00:10:07.299 "nvme_io_md": false, 00:10:07.299 "write_zeroes": true, 00:10:07.299 "zcopy": true, 00:10:07.299 "get_zone_info": false, 00:10:07.299 "zone_management": false, 00:10:07.299 "zone_append": false, 00:10:07.299 "compare": false, 00:10:07.299 "compare_and_write": false, 00:10:07.299 "abort": true, 00:10:07.299 "seek_hole": false, 00:10:07.299 "seek_data": false, 00:10:07.299 "copy": true, 00:10:07.299 "nvme_iov_md": false 00:10:07.299 }, 00:10:07.299 "memory_domains": [ 00:10:07.299 { 00:10:07.299 "dma_device_id": "system", 00:10:07.299 "dma_device_type": 1 00:10:07.299 }, 00:10:07.299 { 00:10:07.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.299 "dma_device_type": 2 00:10:07.299 } 00:10:07.299 ], 00:10:07.299 "driver_specific": {} 00:10:07.299 } 00:10:07.299 ] 
00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.299 09:28:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.299 "name": "Existed_Raid", 00:10:07.299 "uuid": "2ec1859e-7289-41aa-ae5b-1e3b420391db", 00:10:07.299 "strip_size_kb": 0, 00:10:07.299 "state": "online", 00:10:07.299 "raid_level": "raid1", 00:10:07.299 "superblock": true, 00:10:07.299 "num_base_bdevs": 3, 00:10:07.299 "num_base_bdevs_discovered": 3, 00:10:07.299 "num_base_bdevs_operational": 3, 00:10:07.299 "base_bdevs_list": [ 00:10:07.299 { 00:10:07.299 "name": "BaseBdev1", 00:10:07.299 "uuid": "767722b1-5e92-4f6a-83c5-30ab06bb5b33", 00:10:07.299 "is_configured": true, 00:10:07.299 "data_offset": 2048, 00:10:07.299 "data_size": 63488 00:10:07.299 }, 00:10:07.299 { 00:10:07.299 "name": "BaseBdev2", 00:10:07.299 "uuid": "aae58a69-cd65-4f9c-8709-9b5b58e33add", 00:10:07.299 "is_configured": true, 00:10:07.299 "data_offset": 2048, 00:10:07.299 "data_size": 63488 00:10:07.299 }, 00:10:07.299 { 00:10:07.299 "name": "BaseBdev3", 00:10:07.299 "uuid": "6fa8f727-f272-4339-bc90-72e63073d0a2", 00:10:07.299 "is_configured": true, 00:10:07.299 "data_offset": 2048, 00:10:07.299 "data_size": 63488 00:10:07.299 } 00:10:07.299 ] 00:10:07.299 }' 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.299 09:28:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.558 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.558 [2024-11-15 09:28:56.018743] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.817 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.817 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.817 "name": "Existed_Raid", 00:10:07.817 "aliases": [ 00:10:07.817 "2ec1859e-7289-41aa-ae5b-1e3b420391db" 00:10:07.817 ], 00:10:07.817 "product_name": "Raid Volume", 00:10:07.817 "block_size": 512, 00:10:07.817 "num_blocks": 63488, 00:10:07.817 "uuid": "2ec1859e-7289-41aa-ae5b-1e3b420391db", 00:10:07.817 "assigned_rate_limits": { 00:10:07.817 "rw_ios_per_sec": 0, 00:10:07.818 "rw_mbytes_per_sec": 0, 00:10:07.818 "r_mbytes_per_sec": 0, 00:10:07.818 "w_mbytes_per_sec": 0 00:10:07.818 }, 00:10:07.818 "claimed": false, 00:10:07.818 "zoned": false, 00:10:07.818 "supported_io_types": { 00:10:07.818 "read": true, 00:10:07.818 "write": true, 00:10:07.818 "unmap": false, 00:10:07.818 "flush": false, 00:10:07.818 "reset": true, 00:10:07.818 "nvme_admin": false, 00:10:07.818 "nvme_io": false, 00:10:07.818 "nvme_io_md": false, 00:10:07.818 
"write_zeroes": true, 00:10:07.818 "zcopy": false, 00:10:07.818 "get_zone_info": false, 00:10:07.818 "zone_management": false, 00:10:07.818 "zone_append": false, 00:10:07.818 "compare": false, 00:10:07.818 "compare_and_write": false, 00:10:07.818 "abort": false, 00:10:07.818 "seek_hole": false, 00:10:07.818 "seek_data": false, 00:10:07.818 "copy": false, 00:10:07.818 "nvme_iov_md": false 00:10:07.818 }, 00:10:07.818 "memory_domains": [ 00:10:07.818 { 00:10:07.818 "dma_device_id": "system", 00:10:07.818 "dma_device_type": 1 00:10:07.818 }, 00:10:07.818 { 00:10:07.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.818 "dma_device_type": 2 00:10:07.818 }, 00:10:07.818 { 00:10:07.818 "dma_device_id": "system", 00:10:07.818 "dma_device_type": 1 00:10:07.818 }, 00:10:07.818 { 00:10:07.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.818 "dma_device_type": 2 00:10:07.818 }, 00:10:07.818 { 00:10:07.818 "dma_device_id": "system", 00:10:07.818 "dma_device_type": 1 00:10:07.818 }, 00:10:07.818 { 00:10:07.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.818 "dma_device_type": 2 00:10:07.818 } 00:10:07.818 ], 00:10:07.818 "driver_specific": { 00:10:07.818 "raid": { 00:10:07.818 "uuid": "2ec1859e-7289-41aa-ae5b-1e3b420391db", 00:10:07.818 "strip_size_kb": 0, 00:10:07.818 "state": "online", 00:10:07.818 "raid_level": "raid1", 00:10:07.818 "superblock": true, 00:10:07.818 "num_base_bdevs": 3, 00:10:07.818 "num_base_bdevs_discovered": 3, 00:10:07.818 "num_base_bdevs_operational": 3, 00:10:07.818 "base_bdevs_list": [ 00:10:07.818 { 00:10:07.818 "name": "BaseBdev1", 00:10:07.818 "uuid": "767722b1-5e92-4f6a-83c5-30ab06bb5b33", 00:10:07.818 "is_configured": true, 00:10:07.818 "data_offset": 2048, 00:10:07.818 "data_size": 63488 00:10:07.818 }, 00:10:07.818 { 00:10:07.818 "name": "BaseBdev2", 00:10:07.818 "uuid": "aae58a69-cd65-4f9c-8709-9b5b58e33add", 00:10:07.818 "is_configured": true, 00:10:07.818 "data_offset": 2048, 00:10:07.818 "data_size": 63488 00:10:07.818 }, 
00:10:07.818 { 00:10:07.818 "name": "BaseBdev3", 00:10:07.818 "uuid": "6fa8f727-f272-4339-bc90-72e63073d0a2", 00:10:07.818 "is_configured": true, 00:10:07.818 "data_offset": 2048, 00:10:07.818 "data_size": 63488 00:10:07.818 } 00:10:07.818 ] 00:10:07.818 } 00:10:07.818 } 00:10:07.818 }' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:07.818 BaseBdev2 00:10:07.818 BaseBdev3' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.818 
09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.818 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.077 [2024-11-15 09:28:56.318011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.077 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.078 
09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.078 "name": "Existed_Raid", 00:10:08.078 "uuid": "2ec1859e-7289-41aa-ae5b-1e3b420391db", 00:10:08.078 "strip_size_kb": 0, 00:10:08.078 "state": "online", 00:10:08.078 "raid_level": "raid1", 00:10:08.078 "superblock": true, 00:10:08.078 "num_base_bdevs": 3, 00:10:08.078 "num_base_bdevs_discovered": 2, 00:10:08.078 "num_base_bdevs_operational": 2, 00:10:08.078 "base_bdevs_list": [ 00:10:08.078 { 00:10:08.078 "name": null, 00:10:08.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.078 "is_configured": false, 00:10:08.078 "data_offset": 0, 00:10:08.078 "data_size": 63488 00:10:08.078 }, 00:10:08.078 { 00:10:08.078 "name": "BaseBdev2", 00:10:08.078 "uuid": "aae58a69-cd65-4f9c-8709-9b5b58e33add", 00:10:08.078 "is_configured": true, 00:10:08.078 "data_offset": 2048, 00:10:08.078 "data_size": 63488 00:10:08.078 }, 00:10:08.078 { 00:10:08.078 "name": "BaseBdev3", 00:10:08.078 "uuid": "6fa8f727-f272-4339-bc90-72e63073d0a2", 00:10:08.078 "is_configured": true, 00:10:08.078 "data_offset": 2048, 00:10:08.078 "data_size": 63488 00:10:08.078 } 00:10:08.078 ] 00:10:08.078 }' 00:10:08.078 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.078 
09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.660 09:28:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.660 [2024-11-15 09:28:56.964713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:08.660 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.920 [2024-11-15 09:28:57.136297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.920 [2024-11-15 09:28:57.136531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.920 [2024-11-15 09:28:57.241113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.920 [2024-11-15 09:28:57.241263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.920 [2024-11-15 09:28:57.241313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.920 BaseBdev2 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.920 [ 00:10:08.920 { 00:10:08.920 "name": "BaseBdev2", 00:10:08.920 "aliases": [ 00:10:08.920 "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97" 00:10:08.920 ], 00:10:08.920 "product_name": "Malloc disk", 00:10:08.920 "block_size": 512, 00:10:08.920 "num_blocks": 65536, 00:10:08.920 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:08.920 "assigned_rate_limits": { 00:10:08.920 "rw_ios_per_sec": 0, 00:10:08.920 "rw_mbytes_per_sec": 0, 00:10:08.920 "r_mbytes_per_sec": 0, 00:10:08.920 "w_mbytes_per_sec": 0 00:10:08.920 }, 00:10:08.920 "claimed": false, 00:10:08.920 "zoned": false, 00:10:08.920 "supported_io_types": { 00:10:08.920 "read": true, 00:10:08.920 "write": true, 00:10:08.920 "unmap": true, 00:10:08.920 "flush": true, 00:10:08.920 "reset": true, 00:10:08.920 "nvme_admin": false, 00:10:08.920 "nvme_io": false, 00:10:08.920 
"nvme_io_md": false, 00:10:08.920 "write_zeroes": true, 00:10:08.920 "zcopy": true, 00:10:08.920 "get_zone_info": false, 00:10:08.920 "zone_management": false, 00:10:08.920 "zone_append": false, 00:10:08.920 "compare": false, 00:10:08.920 "compare_and_write": false, 00:10:08.920 "abort": true, 00:10:08.920 "seek_hole": false, 00:10:08.920 "seek_data": false, 00:10:08.920 "copy": true, 00:10:08.920 "nvme_iov_md": false 00:10:08.920 }, 00:10:08.920 "memory_domains": [ 00:10:08.920 { 00:10:08.920 "dma_device_id": "system", 00:10:08.920 "dma_device_type": 1 00:10:08.920 }, 00:10:08.920 { 00:10:08.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.920 "dma_device_type": 2 00:10:08.920 } 00:10:08.920 ], 00:10:08.920 "driver_specific": {} 00:10:08.920 } 00:10:08.920 ] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.920 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.179 BaseBdev3 00:10:09.179 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.179 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:09.179 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:09.179 09:28:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:09.179 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:09.179 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.180 [ 00:10:09.180 { 00:10:09.180 "name": "BaseBdev3", 00:10:09.180 "aliases": [ 00:10:09.180 "fa8606b6-fc9c-4af6-8073-7b450b8d25c2" 00:10:09.180 ], 00:10:09.180 "product_name": "Malloc disk", 00:10:09.180 "block_size": 512, 00:10:09.180 "num_blocks": 65536, 00:10:09.180 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:09.180 "assigned_rate_limits": { 00:10:09.180 "rw_ios_per_sec": 0, 00:10:09.180 "rw_mbytes_per_sec": 0, 00:10:09.180 "r_mbytes_per_sec": 0, 00:10:09.180 "w_mbytes_per_sec": 0 00:10:09.180 }, 00:10:09.180 "claimed": false, 00:10:09.180 "zoned": false, 00:10:09.180 "supported_io_types": { 00:10:09.180 "read": true, 00:10:09.180 "write": true, 00:10:09.180 "unmap": true, 00:10:09.180 "flush": true, 00:10:09.180 "reset": true, 00:10:09.180 "nvme_admin": false, 
00:10:09.180 "nvme_io": false, 00:10:09.180 "nvme_io_md": false, 00:10:09.180 "write_zeroes": true, 00:10:09.180 "zcopy": true, 00:10:09.180 "get_zone_info": false, 00:10:09.180 "zone_management": false, 00:10:09.180 "zone_append": false, 00:10:09.180 "compare": false, 00:10:09.180 "compare_and_write": false, 00:10:09.180 "abort": true, 00:10:09.180 "seek_hole": false, 00:10:09.180 "seek_data": false, 00:10:09.180 "copy": true, 00:10:09.180 "nvme_iov_md": false 00:10:09.180 }, 00:10:09.180 "memory_domains": [ 00:10:09.180 { 00:10:09.180 "dma_device_id": "system", 00:10:09.180 "dma_device_type": 1 00:10:09.180 }, 00:10:09.180 { 00:10:09.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.180 "dma_device_type": 2 00:10:09.180 } 00:10:09.180 ], 00:10:09.180 "driver_specific": {} 00:10:09.180 } 00:10:09.180 ] 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.180 [2024-11-15 09:28:57.468182] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.180 [2024-11-15 09:28:57.468366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.180 [2024-11-15 09:28:57.468429] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.180 [2024-11-15 09:28:57.470573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.180 
09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.180 "name": "Existed_Raid", 00:10:09.180 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:09.180 "strip_size_kb": 0, 00:10:09.180 "state": "configuring", 00:10:09.180 "raid_level": "raid1", 00:10:09.180 "superblock": true, 00:10:09.180 "num_base_bdevs": 3, 00:10:09.180 "num_base_bdevs_discovered": 2, 00:10:09.180 "num_base_bdevs_operational": 3, 00:10:09.180 "base_bdevs_list": [ 00:10:09.180 { 00:10:09.180 "name": "BaseBdev1", 00:10:09.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.180 "is_configured": false, 00:10:09.180 "data_offset": 0, 00:10:09.180 "data_size": 0 00:10:09.180 }, 00:10:09.180 { 00:10:09.180 "name": "BaseBdev2", 00:10:09.180 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:09.180 "is_configured": true, 00:10:09.180 "data_offset": 2048, 00:10:09.180 "data_size": 63488 00:10:09.180 }, 00:10:09.180 { 00:10:09.180 "name": "BaseBdev3", 00:10:09.180 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:09.180 "is_configured": true, 00:10:09.180 "data_offset": 2048, 00:10:09.180 "data_size": 63488 00:10:09.180 } 00:10:09.180 ] 00:10:09.180 }' 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.180 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.747 [2024-11-15 09:28:57.967337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.747 09:28:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.747 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.748 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.748 09:28:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.748 09:28:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.748 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.748 "name": 
"Existed_Raid", 00:10:09.748 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:09.748 "strip_size_kb": 0, 00:10:09.748 "state": "configuring", 00:10:09.748 "raid_level": "raid1", 00:10:09.748 "superblock": true, 00:10:09.748 "num_base_bdevs": 3, 00:10:09.748 "num_base_bdevs_discovered": 1, 00:10:09.748 "num_base_bdevs_operational": 3, 00:10:09.748 "base_bdevs_list": [ 00:10:09.748 { 00:10:09.748 "name": "BaseBdev1", 00:10:09.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.748 "is_configured": false, 00:10:09.748 "data_offset": 0, 00:10:09.748 "data_size": 0 00:10:09.748 }, 00:10:09.748 { 00:10:09.748 "name": null, 00:10:09.748 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:09.748 "is_configured": false, 00:10:09.748 "data_offset": 0, 00:10:09.748 "data_size": 63488 00:10:09.748 }, 00:10:09.748 { 00:10:09.748 "name": "BaseBdev3", 00:10:09.748 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:09.748 "is_configured": true, 00:10:09.748 "data_offset": 2048, 00:10:09.748 "data_size": 63488 00:10:09.748 } 00:10:09.748 ] 00:10:09.748 }' 00:10:09.748 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.748 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.007 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.007 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.007 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.007 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.007 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:10.266 
09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.266 [2024-11-15 09:28:58.538310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.266 BaseBdev1 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.266 [ 00:10:10.266 { 00:10:10.266 "name": "BaseBdev1", 00:10:10.266 "aliases": [ 00:10:10.266 "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5" 00:10:10.266 ], 00:10:10.266 "product_name": "Malloc disk", 00:10:10.266 "block_size": 512, 00:10:10.266 "num_blocks": 65536, 00:10:10.266 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:10.266 "assigned_rate_limits": { 00:10:10.266 "rw_ios_per_sec": 0, 00:10:10.266 "rw_mbytes_per_sec": 0, 00:10:10.266 "r_mbytes_per_sec": 0, 00:10:10.266 "w_mbytes_per_sec": 0 00:10:10.266 }, 00:10:10.266 "claimed": true, 00:10:10.266 "claim_type": "exclusive_write", 00:10:10.266 "zoned": false, 00:10:10.266 "supported_io_types": { 00:10:10.266 "read": true, 00:10:10.266 "write": true, 00:10:10.266 "unmap": true, 00:10:10.266 "flush": true, 00:10:10.266 "reset": true, 00:10:10.266 "nvme_admin": false, 00:10:10.266 "nvme_io": false, 00:10:10.266 "nvme_io_md": false, 00:10:10.266 "write_zeroes": true, 00:10:10.266 "zcopy": true, 00:10:10.266 "get_zone_info": false, 00:10:10.266 "zone_management": false, 00:10:10.266 "zone_append": false, 00:10:10.266 "compare": false, 00:10:10.266 "compare_and_write": false, 00:10:10.266 "abort": true, 00:10:10.266 "seek_hole": false, 00:10:10.266 "seek_data": false, 00:10:10.266 "copy": true, 00:10:10.266 "nvme_iov_md": false 00:10:10.266 }, 00:10:10.266 "memory_domains": [ 00:10:10.266 { 00:10:10.266 "dma_device_id": "system", 00:10:10.266 "dma_device_type": 1 00:10:10.266 }, 00:10:10.266 { 00:10:10.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.266 "dma_device_type": 2 00:10:10.266 } 00:10:10.266 ], 00:10:10.266 "driver_specific": {} 00:10:10.266 } 00:10:10.266 ] 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:10.266 
09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.266 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.267 "name": "Existed_Raid", 00:10:10.267 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:10.267 "strip_size_kb": 0, 
00:10:10.267 "state": "configuring", 00:10:10.267 "raid_level": "raid1", 00:10:10.267 "superblock": true, 00:10:10.267 "num_base_bdevs": 3, 00:10:10.267 "num_base_bdevs_discovered": 2, 00:10:10.267 "num_base_bdevs_operational": 3, 00:10:10.267 "base_bdevs_list": [ 00:10:10.267 { 00:10:10.267 "name": "BaseBdev1", 00:10:10.267 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:10.267 "is_configured": true, 00:10:10.267 "data_offset": 2048, 00:10:10.267 "data_size": 63488 00:10:10.267 }, 00:10:10.267 { 00:10:10.267 "name": null, 00:10:10.267 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:10.267 "is_configured": false, 00:10:10.267 "data_offset": 0, 00:10:10.267 "data_size": 63488 00:10:10.267 }, 00:10:10.267 { 00:10:10.267 "name": "BaseBdev3", 00:10:10.267 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:10.267 "is_configured": true, 00:10:10.267 "data_offset": 2048, 00:10:10.267 "data_size": 63488 00:10:10.267 } 00:10:10.267 ] 00:10:10.267 }' 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.267 09:28:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.835 [2024-11-15 09:28:59.077496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.835 "name": "Existed_Raid", 00:10:10.835 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:10.835 "strip_size_kb": 0, 00:10:10.835 "state": "configuring", 00:10:10.835 "raid_level": "raid1", 00:10:10.835 "superblock": true, 00:10:10.835 "num_base_bdevs": 3, 00:10:10.835 "num_base_bdevs_discovered": 1, 00:10:10.835 "num_base_bdevs_operational": 3, 00:10:10.835 "base_bdevs_list": [ 00:10:10.835 { 00:10:10.835 "name": "BaseBdev1", 00:10:10.835 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:10.835 "is_configured": true, 00:10:10.835 "data_offset": 2048, 00:10:10.835 "data_size": 63488 00:10:10.835 }, 00:10:10.835 { 00:10:10.835 "name": null, 00:10:10.835 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:10.835 "is_configured": false, 00:10:10.835 "data_offset": 0, 00:10:10.835 "data_size": 63488 00:10:10.835 }, 00:10:10.835 { 00:10:10.835 "name": null, 00:10:10.835 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:10.835 "is_configured": false, 00:10:10.835 "data_offset": 0, 00:10:10.835 "data_size": 63488 00:10:10.835 } 00:10:10.835 ] 00:10:10.835 }' 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.835 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.095 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.095 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.095 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:11.095 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.095 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.356 [2024-11-15 09:28:59.584680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.356 "name": "Existed_Raid", 00:10:11.356 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:11.356 "strip_size_kb": 0, 00:10:11.356 "state": "configuring", 00:10:11.356 "raid_level": "raid1", 00:10:11.356 "superblock": true, 00:10:11.356 "num_base_bdevs": 3, 00:10:11.356 "num_base_bdevs_discovered": 2, 00:10:11.356 "num_base_bdevs_operational": 3, 00:10:11.356 "base_bdevs_list": [ 00:10:11.356 { 00:10:11.356 "name": "BaseBdev1", 00:10:11.356 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:11.356 "is_configured": true, 00:10:11.356 "data_offset": 2048, 00:10:11.356 "data_size": 63488 00:10:11.356 }, 00:10:11.356 { 00:10:11.356 "name": null, 00:10:11.356 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:11.356 "is_configured": false, 00:10:11.356 "data_offset": 0, 00:10:11.356 "data_size": 63488 00:10:11.356 }, 00:10:11.356 { 00:10:11.356 "name": "BaseBdev3", 00:10:11.356 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:11.356 "is_configured": true, 00:10:11.356 "data_offset": 2048, 00:10:11.356 "data_size": 63488 00:10:11.356 } 00:10:11.356 ] 00:10:11.356 }' 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.356 09:28:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.615 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:11.615 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.615 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.615 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.615 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.616 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:11.616 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.616 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.616 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.616 [2024-11-15 09:29:00.067926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.876 "name": "Existed_Raid", 00:10:11.876 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:11.876 "strip_size_kb": 0, 00:10:11.876 "state": "configuring", 00:10:11.876 "raid_level": "raid1", 00:10:11.876 "superblock": true, 00:10:11.876 "num_base_bdevs": 3, 00:10:11.876 "num_base_bdevs_discovered": 1, 00:10:11.876 "num_base_bdevs_operational": 3, 00:10:11.876 "base_bdevs_list": [ 00:10:11.876 { 00:10:11.876 "name": null, 00:10:11.876 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:11.876 "is_configured": false, 00:10:11.876 "data_offset": 0, 00:10:11.876 "data_size": 63488 00:10:11.876 }, 00:10:11.876 { 00:10:11.876 "name": null, 00:10:11.876 "uuid": 
"f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:11.876 "is_configured": false, 00:10:11.876 "data_offset": 0, 00:10:11.876 "data_size": 63488 00:10:11.876 }, 00:10:11.876 { 00:10:11.876 "name": "BaseBdev3", 00:10:11.876 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:11.876 "is_configured": true, 00:10:11.876 "data_offset": 2048, 00:10:11.876 "data_size": 63488 00:10:11.876 } 00:10:11.876 ] 00:10:11.876 }' 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.876 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.446 [2024-11-15 09:29:00.687543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.446 "name": "Existed_Raid", 00:10:12.446 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:12.446 "strip_size_kb": 0, 00:10:12.446 "state": "configuring", 00:10:12.446 
"raid_level": "raid1", 00:10:12.446 "superblock": true, 00:10:12.446 "num_base_bdevs": 3, 00:10:12.446 "num_base_bdevs_discovered": 2, 00:10:12.446 "num_base_bdevs_operational": 3, 00:10:12.446 "base_bdevs_list": [ 00:10:12.446 { 00:10:12.446 "name": null, 00:10:12.446 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:12.446 "is_configured": false, 00:10:12.446 "data_offset": 0, 00:10:12.446 "data_size": 63488 00:10:12.446 }, 00:10:12.446 { 00:10:12.446 "name": "BaseBdev2", 00:10:12.446 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:12.446 "is_configured": true, 00:10:12.446 "data_offset": 2048, 00:10:12.446 "data_size": 63488 00:10:12.446 }, 00:10:12.446 { 00:10:12.446 "name": "BaseBdev3", 00:10:12.446 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:12.446 "is_configured": true, 00:10:12.446 "data_offset": 2048, 00:10:12.446 "data_size": 63488 00:10:12.446 } 00:10:12.446 ] 00:10:12.446 }' 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.446 09:29:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.706 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.706 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.706 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.706 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.706 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.966 09:29:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.966 [2024-11-15 09:29:01.270611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:12.966 [2024-11-15 09:29:01.270920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:12.966 [2024-11-15 09:29:01.270933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.966 [2024-11-15 09:29:01.271196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:12.966 [2024-11-15 09:29:01.271358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:12.966 [2024-11-15 09:29:01.271371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:12.966 [2024-11-15 09:29:01.271508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.966 NewBaseBdev 00:10:12.966 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:12.967 
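The `waitforbdev NewBaseBdev` call entered here polls until the named bdev is visible before the test proceeds. A simplified poll loop in the spirit of that helper; `check_bdev` is a hypothetical stub standing in for the real probe (`rpc_cmd bdev_get_bdevs -b "$1" -t 2000`), and the retry count and sleep interval are illustrative, not the helper's actual values:

```shell
# check_bdev: hypothetical stub for "is the bdev visible yet?"
check_bdev() { [ -e "/tmp/fake_bdev_$1" ]; }

# Poll until check_bdev succeeds or the retry budget is exhausted
waitforbdev_sketch() {
	local bdev_name=$1 retries=${2:-20} i=0
	while ! check_bdev "$bdev_name"; do
		i=$((i + 1))
		[ "$i" -ge "$retries" ] && return 1
		sleep 0.1
	done
	return 0
}

touch "/tmp/fake_bdev_NewBaseBdev"
waitforbdev_sketch NewBaseBdev && echo "NewBaseBdev ready"
```

The trace's version additionally calls `bdev_wait_for_examine` first, so the bdev is not just present but fully claimed before the state checks run.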
09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.967 [ 00:10:12.967 { 00:10:12.967 "name": "NewBaseBdev", 00:10:12.967 "aliases": [ 00:10:12.967 "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5" 00:10:12.967 ], 00:10:12.967 "product_name": "Malloc disk", 00:10:12.967 "block_size": 512, 00:10:12.967 "num_blocks": 65536, 00:10:12.967 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:12.967 "assigned_rate_limits": { 00:10:12.967 "rw_ios_per_sec": 0, 00:10:12.967 "rw_mbytes_per_sec": 0, 00:10:12.967 "r_mbytes_per_sec": 0, 00:10:12.967 "w_mbytes_per_sec": 0 00:10:12.967 }, 00:10:12.967 "claimed": true, 00:10:12.967 "claim_type": "exclusive_write", 00:10:12.967 
"zoned": false, 00:10:12.967 "supported_io_types": { 00:10:12.967 "read": true, 00:10:12.967 "write": true, 00:10:12.967 "unmap": true, 00:10:12.967 "flush": true, 00:10:12.967 "reset": true, 00:10:12.967 "nvme_admin": false, 00:10:12.967 "nvme_io": false, 00:10:12.967 "nvme_io_md": false, 00:10:12.967 "write_zeroes": true, 00:10:12.967 "zcopy": true, 00:10:12.967 "get_zone_info": false, 00:10:12.967 "zone_management": false, 00:10:12.967 "zone_append": false, 00:10:12.967 "compare": false, 00:10:12.967 "compare_and_write": false, 00:10:12.967 "abort": true, 00:10:12.967 "seek_hole": false, 00:10:12.967 "seek_data": false, 00:10:12.967 "copy": true, 00:10:12.967 "nvme_iov_md": false 00:10:12.967 }, 00:10:12.967 "memory_domains": [ 00:10:12.967 { 00:10:12.967 "dma_device_id": "system", 00:10:12.967 "dma_device_type": 1 00:10:12.967 }, 00:10:12.967 { 00:10:12.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.967 "dma_device_type": 2 00:10:12.967 } 00:10:12.967 ], 00:10:12.967 "driver_specific": {} 00:10:12.967 } 00:10:12.967 ] 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
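With `NewBaseBdev` claimed, all three slots in `base_bdevs_list` are configured again and the state check that follows expects `online` rather than `configuring`; `num_base_bdevs_discovered` is just the count of `is_configured: true` entries. A grep-based sketch of that count (the sample list is hypothetical, condensed from the JSON dumps in this trace):

```shell
# Hypothetical excerpt of base_bdevs_list after NewBaseBdev is re-added;
# the real test reads this from: rpc_cmd bdev_raid_get_bdevs all | jq
base_bdevs='[{"name":"NewBaseBdev","is_configured":true},{"name":"BaseBdev2","is_configured":true},{"name":"BaseBdev3","is_configured":true}]'

# Count configured entries (the trace's num_base_bdevs_discovered)
discovered=$(printf '%s' "$base_bdevs" | grep -o '"is_configured":true' | wc -l)
discovered=$((discovered))   # normalize any whitespace wc may emit

# The raid bdev leaves "configuring" once discovered reaches operational (3 here)
state=configuring
[ "$discovered" -eq 3 ] && state=online
echo "num_base_bdevs_discovered=$discovered state=$state"
```

This mirrors the transition visible in the trace: discovered goes 2 → 1 → 2 → 3 across the remove/add/delete/re-add steps, and only at 3 does `verify_raid_bdev_state` get called with `online`.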
00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.967 "name": "Existed_Raid", 00:10:12.967 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:12.967 "strip_size_kb": 0, 00:10:12.967 "state": "online", 00:10:12.967 "raid_level": "raid1", 00:10:12.967 "superblock": true, 00:10:12.967 "num_base_bdevs": 3, 00:10:12.967 "num_base_bdevs_discovered": 3, 00:10:12.967 "num_base_bdevs_operational": 3, 00:10:12.967 "base_bdevs_list": [ 00:10:12.967 { 00:10:12.967 "name": "NewBaseBdev", 00:10:12.967 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:12.967 "is_configured": true, 00:10:12.967 "data_offset": 2048, 00:10:12.967 "data_size": 63488 00:10:12.967 }, 00:10:12.967 { 00:10:12.967 "name": "BaseBdev2", 00:10:12.967 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:12.967 "is_configured": true, 00:10:12.967 "data_offset": 2048, 00:10:12.967 "data_size": 63488 00:10:12.967 }, 00:10:12.967 
{ 00:10:12.967 "name": "BaseBdev3", 00:10:12.967 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:12.967 "is_configured": true, 00:10:12.967 "data_offset": 2048, 00:10:12.967 "data_size": 63488 00:10:12.967 } 00:10:12.967 ] 00:10:12.967 }' 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.967 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.535 [2024-11-15 09:29:01.790134] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.535 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.535 "name": "Existed_Raid", 00:10:13.535 
"aliases": [ 00:10:13.535 "628261fb-e313-489a-be59-17677102c8b9" 00:10:13.535 ], 00:10:13.535 "product_name": "Raid Volume", 00:10:13.535 "block_size": 512, 00:10:13.535 "num_blocks": 63488, 00:10:13.535 "uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:13.535 "assigned_rate_limits": { 00:10:13.535 "rw_ios_per_sec": 0, 00:10:13.535 "rw_mbytes_per_sec": 0, 00:10:13.536 "r_mbytes_per_sec": 0, 00:10:13.536 "w_mbytes_per_sec": 0 00:10:13.536 }, 00:10:13.536 "claimed": false, 00:10:13.536 "zoned": false, 00:10:13.536 "supported_io_types": { 00:10:13.536 "read": true, 00:10:13.536 "write": true, 00:10:13.536 "unmap": false, 00:10:13.536 "flush": false, 00:10:13.536 "reset": true, 00:10:13.536 "nvme_admin": false, 00:10:13.536 "nvme_io": false, 00:10:13.536 "nvme_io_md": false, 00:10:13.536 "write_zeroes": true, 00:10:13.536 "zcopy": false, 00:10:13.536 "get_zone_info": false, 00:10:13.536 "zone_management": false, 00:10:13.536 "zone_append": false, 00:10:13.536 "compare": false, 00:10:13.536 "compare_and_write": false, 00:10:13.536 "abort": false, 00:10:13.536 "seek_hole": false, 00:10:13.536 "seek_data": false, 00:10:13.536 "copy": false, 00:10:13.536 "nvme_iov_md": false 00:10:13.536 }, 00:10:13.536 "memory_domains": [ 00:10:13.536 { 00:10:13.536 "dma_device_id": "system", 00:10:13.536 "dma_device_type": 1 00:10:13.536 }, 00:10:13.536 { 00:10:13.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.536 "dma_device_type": 2 00:10:13.536 }, 00:10:13.536 { 00:10:13.536 "dma_device_id": "system", 00:10:13.536 "dma_device_type": 1 00:10:13.536 }, 00:10:13.536 { 00:10:13.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.536 "dma_device_type": 2 00:10:13.536 }, 00:10:13.536 { 00:10:13.536 "dma_device_id": "system", 00:10:13.536 "dma_device_type": 1 00:10:13.536 }, 00:10:13.536 { 00:10:13.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.536 "dma_device_type": 2 00:10:13.536 } 00:10:13.536 ], 00:10:13.536 "driver_specific": { 00:10:13.536 "raid": { 00:10:13.536 
"uuid": "628261fb-e313-489a-be59-17677102c8b9", 00:10:13.536 "strip_size_kb": 0, 00:10:13.536 "state": "online", 00:10:13.536 "raid_level": "raid1", 00:10:13.536 "superblock": true, 00:10:13.536 "num_base_bdevs": 3, 00:10:13.536 "num_base_bdevs_discovered": 3, 00:10:13.536 "num_base_bdevs_operational": 3, 00:10:13.536 "base_bdevs_list": [ 00:10:13.536 { 00:10:13.536 "name": "NewBaseBdev", 00:10:13.536 "uuid": "c6fbc479-446a-4fb7-aa7a-74eeaa0f8aa5", 00:10:13.536 "is_configured": true, 00:10:13.536 "data_offset": 2048, 00:10:13.536 "data_size": 63488 00:10:13.536 }, 00:10:13.536 { 00:10:13.536 "name": "BaseBdev2", 00:10:13.536 "uuid": "f3f09d87-2f82-4a41-8eb9-8bd2a4eb5f97", 00:10:13.536 "is_configured": true, 00:10:13.536 "data_offset": 2048, 00:10:13.536 "data_size": 63488 00:10:13.536 }, 00:10:13.536 { 00:10:13.536 "name": "BaseBdev3", 00:10:13.536 "uuid": "fa8606b6-fc9c-4af6-8073-7b450b8d25c2", 00:10:13.536 "is_configured": true, 00:10:13.536 "data_offset": 2048, 00:10:13.536 "data_size": 63488 00:10:13.536 } 00:10:13.536 ] 00:10:13.536 } 00:10:13.536 } 00:10:13.536 }' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:13.536 BaseBdev2 00:10:13.536 BaseBdev3' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:13.536 09:29:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.536 09:29:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.795 [2024-11-15 09:29:02.061348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.795 [2024-11-15 09:29:02.061402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.795 [2024-11-15 09:29:02.061488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.795 [2024-11-15 09:29:02.061808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.795 [2024-11-15 09:29:02.061821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68355 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 68355 ']' 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68355 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68355 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68355' 00:10:13.795 killing process with pid 68355 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68355 00:10:13.795 [2024-11-15 09:29:02.096367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.795 09:29:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68355 00:10:14.054 [2024-11-15 09:29:02.458762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.429 09:29:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.430 00:10:15.430 real 0m11.397s 00:10:15.430 user 0m17.920s 00:10:15.430 sys 0m2.000s 00:10:15.430 ************************************ 00:10:15.430 END TEST raid_state_function_test_sb 00:10:15.430 ************************************ 00:10:15.430 09:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.430 09:29:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.430 09:29:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:15.430 09:29:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:15.430 09:29:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.430 09:29:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.430 ************************************ 00:10:15.430 START TEST raid_superblock_test 00:10:15.430 ************************************ 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68981 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68981 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68981 ']' 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.430 09:29:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.690 [2024-11-15 09:29:03.932341] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:10:15.690 [2024-11-15 09:29:03.932507] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68981 ] 00:10:15.690 [2024-11-15 09:29:04.104382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.950 [2024-11-15 09:29:04.231767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.209 [2024-11-15 09:29:04.455901] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.209 [2024-11-15 09:29:04.455971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:16.467 
09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.467 malloc1 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.467 [2024-11-15 09:29:04.870174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.467 [2024-11-15 09:29:04.870252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.467 [2024-11-15 09:29:04.870276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.467 [2024-11-15 09:29:04.870289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.467 [2024-11-15 09:29:04.872587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.467 [2024-11-15 09:29:04.872632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.467 pt1 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.467 malloc2 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.467 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.467 [2024-11-15 09:29:04.929624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.467 [2024-11-15 09:29:04.929699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.467 [2024-11-15 09:29:04.929726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.467 [2024-11-15 09:29:04.929743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.467 [2024-11-15 09:29:04.932217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.467 [2024-11-15 09:29:04.932260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.726 
pt2 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.726 malloc3 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.726 09:29:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.726 [2024-11-15 09:29:05.000641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.726 [2024-11-15 09:29:05.000731] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.726 [2024-11-15 09:29:05.000756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:16.726 [2024-11-15 09:29:05.000767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.726 [2024-11-15 09:29:05.003036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.726 [2024-11-15 09:29:05.003073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.726 pt3 00:10:16.726 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.726 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.726 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.726 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:16.726 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.726 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.726 [2024-11-15 09:29:05.012665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.726 [2024-11-15 09:29:05.014790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.726 [2024-11-15 09:29:05.014881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.726 [2024-11-15 09:29:05.015067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:16.726 [2024-11-15 09:29:05.015093] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.726 [2024-11-15 09:29:05.015358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.726 
[2024-11-15 09:29:05.015558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:16.727 [2024-11-15 09:29:05.015579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:16.727 [2024-11-15 09:29:05.015756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.727 "name": "raid_bdev1", 00:10:16.727 "uuid": "43927132-43ee-48b0-a389-b91422a2d942", 00:10:16.727 "strip_size_kb": 0, 00:10:16.727 "state": "online", 00:10:16.727 "raid_level": "raid1", 00:10:16.727 "superblock": true, 00:10:16.727 "num_base_bdevs": 3, 00:10:16.727 "num_base_bdevs_discovered": 3, 00:10:16.727 "num_base_bdevs_operational": 3, 00:10:16.727 "base_bdevs_list": [ 00:10:16.727 { 00:10:16.727 "name": "pt1", 00:10:16.727 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.727 "is_configured": true, 00:10:16.727 "data_offset": 2048, 00:10:16.727 "data_size": 63488 00:10:16.727 }, 00:10:16.727 { 00:10:16.727 "name": "pt2", 00:10:16.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.727 "is_configured": true, 00:10:16.727 "data_offset": 2048, 00:10:16.727 "data_size": 63488 00:10:16.727 }, 00:10:16.727 { 00:10:16.727 "name": "pt3", 00:10:16.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.727 "is_configured": true, 00:10:16.727 "data_offset": 2048, 00:10:16.727 "data_size": 63488 00:10:16.727 } 00:10:16.727 ] 00:10:16.727 }' 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.727 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.985 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.986 09:29:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.986 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.245 [2024-11-15 09:29:05.452358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.245 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.245 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.245 "name": "raid_bdev1", 00:10:17.245 "aliases": [ 00:10:17.245 "43927132-43ee-48b0-a389-b91422a2d942" 00:10:17.245 ], 00:10:17.245 "product_name": "Raid Volume", 00:10:17.245 "block_size": 512, 00:10:17.245 "num_blocks": 63488, 00:10:17.245 "uuid": "43927132-43ee-48b0-a389-b91422a2d942", 00:10:17.245 "assigned_rate_limits": { 00:10:17.245 "rw_ios_per_sec": 0, 00:10:17.245 "rw_mbytes_per_sec": 0, 00:10:17.245 "r_mbytes_per_sec": 0, 00:10:17.245 "w_mbytes_per_sec": 0 00:10:17.245 }, 00:10:17.245 "claimed": false, 00:10:17.245 "zoned": false, 00:10:17.245 "supported_io_types": { 00:10:17.245 "read": true, 00:10:17.245 "write": true, 00:10:17.245 "unmap": false, 00:10:17.245 "flush": false, 00:10:17.245 "reset": true, 00:10:17.245 "nvme_admin": false, 00:10:17.245 "nvme_io": false, 00:10:17.245 "nvme_io_md": false, 00:10:17.245 "write_zeroes": true, 00:10:17.245 "zcopy": false, 00:10:17.245 "get_zone_info": false, 00:10:17.245 "zone_management": false, 00:10:17.246 "zone_append": false, 00:10:17.246 "compare": false, 00:10:17.246 
"compare_and_write": false, 00:10:17.246 "abort": false, 00:10:17.246 "seek_hole": false, 00:10:17.246 "seek_data": false, 00:10:17.246 "copy": false, 00:10:17.246 "nvme_iov_md": false 00:10:17.246 }, 00:10:17.246 "memory_domains": [ 00:10:17.246 { 00:10:17.246 "dma_device_id": "system", 00:10:17.246 "dma_device_type": 1 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.246 "dma_device_type": 2 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "dma_device_id": "system", 00:10:17.246 "dma_device_type": 1 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.246 "dma_device_type": 2 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "dma_device_id": "system", 00:10:17.246 "dma_device_type": 1 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.246 "dma_device_type": 2 00:10:17.246 } 00:10:17.246 ], 00:10:17.246 "driver_specific": { 00:10:17.246 "raid": { 00:10:17.246 "uuid": "43927132-43ee-48b0-a389-b91422a2d942", 00:10:17.246 "strip_size_kb": 0, 00:10:17.246 "state": "online", 00:10:17.246 "raid_level": "raid1", 00:10:17.246 "superblock": true, 00:10:17.246 "num_base_bdevs": 3, 00:10:17.246 "num_base_bdevs_discovered": 3, 00:10:17.246 "num_base_bdevs_operational": 3, 00:10:17.246 "base_bdevs_list": [ 00:10:17.246 { 00:10:17.246 "name": "pt1", 00:10:17.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.246 "is_configured": true, 00:10:17.246 "data_offset": 2048, 00:10:17.246 "data_size": 63488 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "name": "pt2", 00:10:17.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.246 "is_configured": true, 00:10:17.246 "data_offset": 2048, 00:10:17.246 "data_size": 63488 00:10:17.246 }, 00:10:17.246 { 00:10:17.246 "name": "pt3", 00:10:17.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.246 "is_configured": true, 00:10:17.246 "data_offset": 2048, 00:10:17.246 "data_size": 63488 00:10:17.246 } 
00:10:17.246 ] 00:10:17.246 } 00:10:17.246 } 00:10:17.246 }' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.246 pt2 00:10:17.246 pt3' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:17.246 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.505 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:17.505 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:17.505 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:17.505 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:10:17.505 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 [2024-11-15 09:29:05.735816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=43927132-43ee-48b0-a389-b91422a2d942
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 43927132-43ee-48b0-a389-b91422a2d942 ']'
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 [2024-11-15 09:29:05.783469] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:17.506 [2024-11-15 09:29:05.783534] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:17.506 [2024-11-15 09:29:05.783633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:17.506 [2024-11-15 09:29:05.783715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:17.506 [2024-11-15 09:29:05.783732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 [2024-11-15 09:29:05.907251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:17.506 [2024-11-15 09:29:05.909122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:17.506 [2024-11-15 09:29:05.909172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:17.506 [2024-11-15 09:29:05.909220] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:17.506 [2024-11-15 09:29:05.909286] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:17.506 [2024-11-15 09:29:05.909307] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:17.506 [2024-11-15 09:29:05.909323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:17.506 [2024-11-15 09:29:05.909332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:10:17.506 request:
00:10:17.506 {
00:10:17.506 "name": "raid_bdev1",
00:10:17.506 "raid_level": "raid1",
00:10:17.506 "base_bdevs": [
00:10:17.506 "malloc1",
00:10:17.506 "malloc2",
00:10:17.506 "malloc3"
00:10:17.506 ],
00:10:17.506 "superblock": false,
00:10:17.506 "method": "bdev_raid_create",
00:10:17.506 "req_id": 1
00:10:17.506 }
00:10:17.506 Got JSON-RPC error response
00:10:17.506 response:
00:10:17.506 {
00:10:17.506 "code": -17,
00:10:17.506 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:17.506 }
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.506 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.766 [2024-11-15 09:29:05.975134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:17.766 [2024-11-15 09:29:05.975218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:17.766 [2024-11-15 09:29:05.975244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:10:17.766 [2024-11-15 09:29:05.975262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:17.766 [2024-11-15 09:29:05.977656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:17.766 [2024-11-15 09:29:05.977712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:17.766 [2024-11-15 09:29:05.977803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:17.766 [2024-11-15 09:29:05.977874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:17.766 pt1
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:17.767 09:29:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:17.767 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:17.767 "name": "raid_bdev1",
00:10:17.767 "uuid": "43927132-43ee-48b0-a389-b91422a2d942",
00:10:17.767 "strip_size_kb": 0,
00:10:17.767 "state": "configuring",
00:10:17.767 "raid_level": "raid1",
00:10:17.767 "superblock": true,
00:10:17.767 "num_base_bdevs": 3,
00:10:17.767 "num_base_bdevs_discovered": 1,
00:10:17.767 "num_base_bdevs_operational": 3,
00:10:17.767 "base_bdevs_list": [
00:10:17.767 {
00:10:17.767 "name": "pt1",
00:10:17.767 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:17.767 "is_configured": true,
00:10:17.767 "data_offset": 2048,
00:10:17.767 "data_size": 63488
00:10:17.767 },
00:10:17.767 {
00:10:17.767 "name": null,
00:10:17.767 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:17.767 "is_configured": false,
00:10:17.767 "data_offset": 2048,
00:10:17.767 "data_size": 63488
00:10:17.767 },
00:10:17.767 {
00:10:17.767 "name": null,
00:10:17.767 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:17.767 "is_configured": false,
00:10:17.767 "data_offset": 2048,
00:10:17.767 "data_size": 63488
00:10:17.767 }
00:10:17.767 ]
00:10:17.767 }'
00:10:17.767 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:17.767 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.027 [2024-11-15 09:29:06.446451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:18.027 [2024-11-15 09:29:06.446543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:18.027 [2024-11-15 09:29:06.446574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
[2024-11-15 09:29:06.446585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:18.027 [2024-11-15 09:29:06.447075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:18.027 [2024-11-15 09:29:06.447102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:18.027 [2024-11-15 09:29:06.447195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:18.027 [2024-11-15 09:29:06.447225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:18.027 pt2
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.027 [2024-11-15 09:29:06.454425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:18.027 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.286 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.286 "name": "raid_bdev1",
00:10:18.286 "uuid": "43927132-43ee-48b0-a389-b91422a2d942",
00:10:18.286 "strip_size_kb": 0,
00:10:18.286 "state": "configuring",
00:10:18.286 "raid_level": "raid1",
00:10:18.286 "superblock": true,
00:10:18.286 "num_base_bdevs": 3,
00:10:18.286 "num_base_bdevs_discovered": 1,
00:10:18.286 "num_base_bdevs_operational": 3,
00:10:18.286 "base_bdevs_list": [
00:10:18.286 {
00:10:18.286 "name": "pt1",
00:10:18.286 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:18.286 "is_configured": true,
00:10:18.286 "data_offset": 2048,
00:10:18.286 "data_size": 63488
00:10:18.286 },
00:10:18.286 {
00:10:18.286 "name": null,
00:10:18.286 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:18.286 "is_configured": false,
00:10:18.286 "data_offset": 0,
00:10:18.286 "data_size": 63488
00:10:18.286 },
00:10:18.286 {
00:10:18.286 "name": null,
00:10:18.286 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:18.286 "is_configured": false,
00:10:18.286 "data_offset": 2048,
00:10:18.286 "data_size": 63488
00:10:18.286 }
00:10:18.286 ]
00:10:18.286 }'
00:10:18.286 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.286 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.546 [2024-11-15 09:29:06.937595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:18.546 [2024-11-15 09:29:06.937686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:18.546 [2024-11-15 09:29:06.937711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:10:18.546 [2024-11-15 09:29:06.937725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:18.546 [2024-11-15 09:29:06.938327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:18.546 [2024-11-15 09:29:06.938367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:18.546 [2024-11-15 09:29:06.938467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:18.546 [2024-11-15 09:29:06.938523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:18.546 pt2
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.546 [2024-11-15 09:29:06.949533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:18.546 [2024-11-15 09:29:06.949590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:18.546 [2024-11-15 09:29:06.949613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:18.546 [2024-11-15 09:29:06.949627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:18.546 [2024-11-15 09:29:06.950048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:18.546 [2024-11-15 09:29:06.950079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:18.546 [2024-11-15 09:29:06.950148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:18.546 [2024-11-15 09:29:06.950177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:18.546 [2024-11-15 09:29:06.950298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:18.546 [2024-11-15 09:29:06.950319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:18.546 [2024-11-15 09:29:06.950560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:18.546 [2024-11-15 09:29:06.950721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:18.546 [2024-11-15 09:29:06.950736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:10:18.546 [2024-11-15 09:29:06.950935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:18.546 pt3
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:18.546 09:29:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:18.546 "name": "raid_bdev1",
00:10:18.546 "uuid": "43927132-43ee-48b0-a389-b91422a2d942",
00:10:18.546 "strip_size_kb": 0,
00:10:18.546 "state": "online",
00:10:18.546 "raid_level": "raid1",
00:10:18.546 "superblock": true,
00:10:18.546 "num_base_bdevs": 3,
00:10:18.546 "num_base_bdevs_discovered": 3,
00:10:18.546 "num_base_bdevs_operational": 3,
00:10:18.546 "base_bdevs_list": [
00:10:18.547 {
00:10:18.547 "name": "pt1",
00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:18.547 "is_configured": true,
00:10:18.547 "data_offset": 2048,
00:10:18.547 "data_size": 63488
00:10:18.547 },
00:10:18.547 {
00:10:18.547 "name": "pt2",
00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:18.547 "is_configured": true,
00:10:18.547 "data_offset": 2048,
00:10:18.547 "data_size": 63488
00:10:18.547 },
00:10:18.547 {
00:10:18.547 "name": "pt3",
00:10:18.547 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:18.547 "is_configured": true,
00:10:18.547 "data_offset": 2048,
00:10:18.547 "data_size": 63488
00:10:18.547 }
00:10:18.547 ]
00:10:18.547 }'
00:10:18.547 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:18.547 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:19.117 [2024-11-15 09:29:07.449110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.117 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:19.117 "name": "raid_bdev1",
00:10:19.117 "aliases": [
00:10:19.117 "43927132-43ee-48b0-a389-b91422a2d942"
00:10:19.117 ],
00:10:19.117 "product_name": "Raid Volume",
00:10:19.117 "block_size": 512,
00:10:19.117 "num_blocks": 63488,
00:10:19.117 "uuid": "43927132-43ee-48b0-a389-b91422a2d942",
00:10:19.117 "assigned_rate_limits": {
00:10:19.117 "rw_ios_per_sec": 0,
00:10:19.117 "rw_mbytes_per_sec": 0,
00:10:19.117 "r_mbytes_per_sec": 0,
00:10:19.117 "w_mbytes_per_sec": 0
00:10:19.117 },
00:10:19.117 "claimed": false,
00:10:19.117 "zoned": false,
00:10:19.117 "supported_io_types": {
00:10:19.117 "read": true,
00:10:19.117 "write": true,
00:10:19.117 "unmap": false,
00:10:19.117 "flush": false,
00:10:19.117 "reset": true,
00:10:19.117 "nvme_admin": false,
00:10:19.117 "nvme_io": false,
00:10:19.117 "nvme_io_md": false,
00:10:19.117 "write_zeroes": true,
00:10:19.117 "zcopy": false,
00:10:19.117 "get_zone_info": false,
00:10:19.117 "zone_management": false,
00:10:19.117 "zone_append": false,
00:10:19.117 "compare": false,
00:10:19.117 "compare_and_write": false,
00:10:19.117 "abort": false,
00:10:19.117 "seek_hole": false,
00:10:19.117 "seek_data": false,
00:10:19.117 "copy": false,
00:10:19.117 "nvme_iov_md": false
00:10:19.117 },
00:10:19.117 "memory_domains": [
00:10:19.117 {
00:10:19.117 "dma_device_id": "system",
00:10:19.117 "dma_device_type": 1
00:10:19.117 },
00:10:19.117 {
00:10:19.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:19.117 "dma_device_type": 2
00:10:19.117 },
00:10:19.117 {
00:10:19.117 "dma_device_id": "system",
00:10:19.117 "dma_device_type": 1
00:10:19.117 },
00:10:19.117 {
00:10:19.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:19.117 "dma_device_type": 2
00:10:19.117 },
00:10:19.117 {
00:10:19.117 "dma_device_id": "system",
00:10:19.117 "dma_device_type": 1
00:10:19.117 },
00:10:19.117 {
00:10:19.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:19.117 "dma_device_type": 2
00:10:19.117 }
00:10:19.117 ],
00:10:19.117 "driver_specific": {
00:10:19.117 "raid": {
00:10:19.117 "uuid": "43927132-43ee-48b0-a389-b91422a2d942",
00:10:19.117 "strip_size_kb": 0,
00:10:19.117 "state": "online",
00:10:19.117 "raid_level": "raid1",
00:10:19.117 "superblock": true,
00:10:19.117 "num_base_bdevs": 3,
00:10:19.117 "num_base_bdevs_discovered": 3,
00:10:19.117 "num_base_bdevs_operational": 3,
00:10:19.117 "base_bdevs_list": [
00:10:19.117 {
00:10:19.117 "name": "pt1",
00:10:19.117 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:19.117 "is_configured": true,
00:10:19.117 "data_offset": 2048,
00:10:19.117 "data_size": 63488
00:10:19.117 },
00:10:19.117 {
00:10:19.117 "name": "pt2",
00:10:19.118 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:19.118 "is_configured": true,
00:10:19.118 "data_offset": 2048,
00:10:19.118 "data_size": 63488
00:10:19.118 },
00:10:19.118 {
00:10:19.118 "name": "pt3",
00:10:19.118 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:19.118 "is_configured": true,
00:10:19.118 "data_offset": 2048,
00:10:19.118 "data_size": 63488
00:10:19.118 }
00:10:19.118 ]
00:10:19.118 }
00:10:19.118 }
00:10:19.118 }'
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:19.118 pt2
00:10:19.118 pt3'
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:19.118 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.378 [2024-11-15 09:29:07.732578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 43927132-43ee-48b0-a389-b91422a2d942 '!=' 43927132-43ee-48b0-a389-b91422a2d942 ']'
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.378 [2024-11-15 09:29:07.780283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:19.378 "name": "raid_bdev1",
00:10:19.378 "uuid": "43927132-43ee-48b0-a389-b91422a2d942",
00:10:19.378 "strip_size_kb": 0,
00:10:19.378 "state": "online",
00:10:19.378 "raid_level": "raid1",
00:10:19.378 "superblock": true,
00:10:19.378 "num_base_bdevs": 3,
00:10:19.378 "num_base_bdevs_discovered": 2,
00:10:19.378 "num_base_bdevs_operational": 2,
00:10:19.378 "base_bdevs_list": [
00:10:19.378 {
00:10:19.378 "name": null,
00:10:19.378 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:19.378 "is_configured": false,
00:10:19.378 "data_offset": 0,
00:10:19.378 "data_size": 63488
00:10:19.378 },
00:10:19.378 {
00:10:19.378 "name": "pt2",
00:10:19.378 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:19.378 "is_configured": true,
00:10:19.378 "data_offset": 2048,
00:10:19.378 "data_size": 63488
00:10:19.378 },
00:10:19.378 {
00:10:19.378 "name": "pt3",
00:10:19.378 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:19.378 "is_configured": true,
00:10:19.378 "data_offset": 2048,
00:10:19.378 "data_size": 63488
00:10:19.378 }
00:10:19.378 ]
00:10:19.378 }'
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:19.378 09:29:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.946 [2024-11-15 09:29:08.219532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:19.946 [2024-11-15 09:29:08.219585] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:19.946 [2024-11-15 09:29:08.219681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:19.946 [2024-11-15 09:29:08.219754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:19.946 [2024-11-15 09:29:08.219771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:10:19.946 09:29:08 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.946 09:29:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.946 [2024-11-15 09:29:08.307353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.946 [2024-11-15 09:29:08.307476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.946 [2024-11-15 09:29:08.307516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:19.946 [2024-11-15 09:29:08.307561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.946 [2024-11-15 09:29:08.310209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.946 [2024-11-15 09:29:08.310295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.946 [2024-11-15 09:29:08.310398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.946 [2024-11-15 09:29:08.310459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.946 pt2 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.946 09:29:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.946 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.947 "name": "raid_bdev1", 00:10:19.947 "uuid": "43927132-43ee-48b0-a389-b91422a2d942", 00:10:19.947 "strip_size_kb": 0, 00:10:19.947 "state": "configuring", 00:10:19.947 "raid_level": "raid1", 00:10:19.947 "superblock": true, 00:10:19.947 "num_base_bdevs": 3, 00:10:19.947 "num_base_bdevs_discovered": 1, 00:10:19.947 "num_base_bdevs_operational": 2, 00:10:19.947 "base_bdevs_list": [ 00:10:19.947 { 00:10:19.947 "name": null, 00:10:19.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.947 "is_configured": false, 00:10:19.947 "data_offset": 2048, 00:10:19.947 "data_size": 63488 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "name": "pt2", 00:10:19.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.947 "is_configured": true, 00:10:19.947 "data_offset": 2048, 00:10:19.947 "data_size": 63488 00:10:19.947 }, 00:10:19.947 { 00:10:19.947 "name": null, 00:10:19.947 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.947 "is_configured": false, 00:10:19.947 "data_offset": 2048, 00:10:19.947 "data_size": 63488 00:10:19.947 } 
00:10:19.947 ] 00:10:19.947 }' 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.947 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.514 [2024-11-15 09:29:08.746660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.514 [2024-11-15 09:29:08.746757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.514 [2024-11-15 09:29:08.746782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:20.514 [2024-11-15 09:29:08.746796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.514 [2024-11-15 09:29:08.747350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.514 [2024-11-15 09:29:08.747393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.514 [2024-11-15 09:29:08.747503] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.514 [2024-11-15 09:29:08.747535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.514 [2024-11-15 09:29:08.747694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:20.514 [2024-11-15 09:29:08.747714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.514 [2024-11-15 09:29:08.748042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:20.514 [2024-11-15 09:29:08.748233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.514 [2024-11-15 09:29:08.748244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:20.514 [2024-11-15 09:29:08.748421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.514 pt3 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.514 "name": "raid_bdev1", 00:10:20.514 "uuid": "43927132-43ee-48b0-a389-b91422a2d942", 00:10:20.514 "strip_size_kb": 0, 00:10:20.514 "state": "online", 00:10:20.514 "raid_level": "raid1", 00:10:20.514 "superblock": true, 00:10:20.514 "num_base_bdevs": 3, 00:10:20.514 "num_base_bdevs_discovered": 2, 00:10:20.514 "num_base_bdevs_operational": 2, 00:10:20.514 "base_bdevs_list": [ 00:10:20.514 { 00:10:20.514 "name": null, 00:10:20.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.514 "is_configured": false, 00:10:20.514 "data_offset": 2048, 00:10:20.514 "data_size": 63488 00:10:20.514 }, 00:10:20.514 { 00:10:20.514 "name": "pt2", 00:10:20.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.514 "is_configured": true, 00:10:20.514 "data_offset": 2048, 00:10:20.514 "data_size": 63488 00:10:20.514 }, 00:10:20.514 { 00:10:20.514 "name": "pt3", 00:10:20.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.514 "is_configured": true, 00:10:20.514 "data_offset": 2048, 00:10:20.514 "data_size": 63488 00:10:20.514 } 00:10:20.514 ] 00:10:20.514 }' 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.514 09:29:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.774 [2024-11-15 09:29:09.201940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.774 [2024-11-15 09:29:09.202095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.774 [2024-11-15 09:29:09.202225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.774 [2024-11-15 09:29:09.202331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.774 [2024-11-15 09:29:09.202388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.774 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.033 [2024-11-15 09:29:09.273815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.033 [2024-11-15 09:29:09.273924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.033 [2024-11-15 09:29:09.273949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:21.033 [2024-11-15 09:29:09.273961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.033 [2024-11-15 09:29:09.276580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.033 [2024-11-15 09:29:09.276624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.033 [2024-11-15 09:29:09.276720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.033 [2024-11-15 09:29:09.276782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.033 [2024-11-15 09:29:09.276970] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:21.033 [2024-11-15 09:29:09.276985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.033 [2024-11-15 09:29:09.277005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:21.033 [2024-11-15 09:29:09.277074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.033 pt1 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.033 "name": "raid_bdev1", 00:10:21.033 "uuid": "43927132-43ee-48b0-a389-b91422a2d942", 00:10:21.033 "strip_size_kb": 0, 00:10:21.033 "state": "configuring", 00:10:21.033 "raid_level": "raid1", 00:10:21.033 "superblock": true, 00:10:21.033 "num_base_bdevs": 3, 00:10:21.033 "num_base_bdevs_discovered": 1, 00:10:21.033 "num_base_bdevs_operational": 2, 00:10:21.033 "base_bdevs_list": [ 00:10:21.033 { 00:10:21.033 "name": null, 00:10:21.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.033 "is_configured": false, 00:10:21.033 "data_offset": 2048, 00:10:21.033 "data_size": 63488 00:10:21.033 }, 00:10:21.033 { 00:10:21.033 "name": "pt2", 00:10:21.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.033 "is_configured": true, 00:10:21.033 "data_offset": 2048, 00:10:21.033 "data_size": 63488 00:10:21.033 }, 00:10:21.033 { 00:10:21.033 "name": null, 00:10:21.033 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.033 "is_configured": false, 00:10:21.033 "data_offset": 2048, 00:10:21.033 "data_size": 63488 00:10:21.033 } 00:10:21.033 ] 00:10:21.033 }' 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.033 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.292 [2024-11-15 09:29:09.741047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.292 [2024-11-15 09:29:09.741179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.292 [2024-11-15 09:29:09.741237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:21.292 [2024-11-15 09:29:09.741280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.292 [2024-11-15 09:29:09.741886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.292 [2024-11-15 09:29:09.741957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.292 [2024-11-15 09:29:09.742093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.292 [2024-11-15 09:29:09.742181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.292 [2024-11-15 09:29:09.742369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:21.292 [2024-11-15 09:29:09.742411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.292 [2024-11-15 09:29:09.742709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:21.292 [2024-11-15 09:29:09.742977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:21.292 [2024-11-15 09:29:09.743026] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:21.292 [2024-11-15 09:29:09.743235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.292 pt3 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.292 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.552 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:21.552 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.552 "name": "raid_bdev1", 00:10:21.552 "uuid": "43927132-43ee-48b0-a389-b91422a2d942", 00:10:21.552 "strip_size_kb": 0, 00:10:21.552 "state": "online", 00:10:21.552 "raid_level": "raid1", 00:10:21.552 "superblock": true, 00:10:21.552 "num_base_bdevs": 3, 00:10:21.552 "num_base_bdevs_discovered": 2, 00:10:21.552 "num_base_bdevs_operational": 2, 00:10:21.552 "base_bdevs_list": [ 00:10:21.552 { 00:10:21.552 "name": null, 00:10:21.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.552 "is_configured": false, 00:10:21.552 "data_offset": 2048, 00:10:21.552 "data_size": 63488 00:10:21.552 }, 00:10:21.552 { 00:10:21.552 "name": "pt2", 00:10:21.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.552 "is_configured": true, 00:10:21.552 "data_offset": 2048, 00:10:21.552 "data_size": 63488 00:10:21.552 }, 00:10:21.552 { 00:10:21.552 "name": "pt3", 00:10:21.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.552 "is_configured": true, 00:10:21.552 "data_offset": 2048, 00:10:21.552 "data_size": 63488 00:10:21.552 } 00:10:21.552 ] 00:10:21.552 }' 00:10:21.552 09:29:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.553 09:29:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.813 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.813 [2024-11-15 09:29:10.272505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 43927132-43ee-48b0-a389-b91422a2d942 '!=' 43927132-43ee-48b0-a389-b91422a2d942 ']' 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68981 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68981 ']' 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68981 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68981 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68981' 00:10:22.073 killing process with pid 68981 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 68981 00:10:22.073 [2024-11-15 09:29:10.359233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.073 [2024-11-15 09:29:10.359369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.073 09:29:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68981 00:10:22.073 [2024-11-15 09:29:10.359441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.073 [2024-11-15 09:29:10.359456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:22.333 [2024-11-15 09:29:10.723389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.714 09:29:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.714 00:10:23.714 real 0m8.209s 00:10:23.714 user 0m12.666s 00:10:23.714 sys 0m1.466s 00:10:23.714 09:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.714 09:29:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.714 ************************************ 00:10:23.714 END TEST raid_superblock_test 00:10:23.714 ************************************ 00:10:23.714 09:29:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:23.714 09:29:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:23.714 09:29:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.714 09:29:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.714 ************************************ 00:10:23.714 START TEST raid_read_error_test 00:10:23.714 ************************************ 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:10:23.714 09:29:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.714 09:29:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0L2qlEtP2L 00:10:23.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69432 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69432 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69432 ']' 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:23.714 09:29:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.974 [2024-11-15 09:29:12.217129] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:10:23.974 [2024-11-15 09:29:12.217341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69432 ] 00:10:23.974 [2024-11-15 09:29:12.393687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.234 [2024-11-15 09:29:12.527472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.493 [2024-11-15 09:29:12.764717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.493 [2024-11-15 09:29:12.764797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 BaseBdev1_malloc 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 true 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
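The trace above now enters the base-bdev setup phase: for each `BaseBdev<N>` the script creates a malloc bdev, wraps it with an error-injection bdev, and claims that via a passthru bdev. The following is a minimal sketch of that chain, with `rpc_cmd` stubbed to echo the RPC it would issue (assumption: in the real test `rpc_cmd` forwards to `scripts/rpc.py` against `/var/tmp/spdk.sock`; the `EE_` prefix on the error bdev's name is taken from the passthru calls visible in the trace).

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev chain from bdev_raid.sh@815-@817.
# rpc_cmd is stubbed here so the sketch runs without a live SPDK target.
rpc_cmd() { echo "rpc_cmd $*"; }

setup_error_bdev() {
  local bdev=$1
  rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"          # 32 MiB, 512 B blocks
  rpc_cmd bdev_error_create "${bdev}_malloc"                     # error injector, named EE_<name>
  rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev" # claim under the final name
}

setup_error_bdev BaseBdev1
```

Once all three chains exist, the trace assembles them into `raid_bdev1` with `bdev_raid_create -r raid1 -s`; the error injector in the middle of each chain is what later lets `bdev_error_inject_error` fail reads against a single leg.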
00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 [2024-11-15 09:29:13.141236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.783 [2024-11-15 09:29:13.141301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.783 [2024-11-15 09:29:13.141326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.783 [2024-11-15 09:29:13.141340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.783 [2024-11-15 09:29:13.143910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.783 [2024-11-15 09:29:13.143955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.783 BaseBdev1 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 BaseBdev2_malloc 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 true 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.783 [2024-11-15 09:29:13.213503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.783 [2024-11-15 09:29:13.213631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.783 [2024-11-15 09:29:13.213655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.783 [2024-11-15 09:29:13.213667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.783 [2024-11-15 09:29:13.216062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.783 [2024-11-15 09:29:13.216109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.783 BaseBdev2 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.783 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.068 BaseBdev3_malloc 00:10:25.068 09:29:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.068 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.068 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.068 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.068 true 00:10:25.068 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.068 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.068 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.069 [2024-11-15 09:29:13.289611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.069 [2024-11-15 09:29:13.289669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.069 [2024-11-15 09:29:13.289706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:25.069 [2024-11-15 09:29:13.289720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.069 [2024-11-15 09:29:13.292178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.069 [2024-11-15 09:29:13.292222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.069 BaseBdev3 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.069 [2024-11-15 09:29:13.301678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.069 [2024-11-15 09:29:13.303712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.069 [2024-11-15 09:29:13.303796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.069 [2024-11-15 09:29:13.304071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:25.069 [2024-11-15 09:29:13.304087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:25.069 [2024-11-15 09:29:13.304380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:25.069 [2024-11-15 09:29:13.304588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:25.069 [2024-11-15 09:29:13.304603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:25.069 [2024-11-15 09:29:13.304784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.069 09:29:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.069 "name": "raid_bdev1", 00:10:25.069 "uuid": "7245f638-437c-4395-bd3c-391b64d43165", 00:10:25.069 "strip_size_kb": 0, 00:10:25.069 "state": "online", 00:10:25.069 "raid_level": "raid1", 00:10:25.069 "superblock": true, 00:10:25.069 "num_base_bdevs": 3, 00:10:25.069 "num_base_bdevs_discovered": 3, 00:10:25.069 "num_base_bdevs_operational": 3, 00:10:25.069 "base_bdevs_list": [ 00:10:25.069 { 00:10:25.069 "name": "BaseBdev1", 00:10:25.069 "uuid": "e61a8e76-2824-5566-ac21-b3d50942b867", 00:10:25.069 "is_configured": true, 00:10:25.069 "data_offset": 2048, 00:10:25.069 "data_size": 63488 00:10:25.069 }, 00:10:25.069 { 00:10:25.069 "name": "BaseBdev2", 00:10:25.069 "uuid": "fe86d925-ad2d-5964-b254-eb1d78f74109", 00:10:25.069 "is_configured": true, 00:10:25.069 "data_offset": 2048, 00:10:25.069 "data_size": 63488 
00:10:25.069 }, 00:10:25.069 { 00:10:25.069 "name": "BaseBdev3", 00:10:25.069 "uuid": "0e1e7a5b-5432-5b64-8b69-702ba74aea98", 00:10:25.069 "is_configured": true, 00:10:25.069 "data_offset": 2048, 00:10:25.069 "data_size": 63488 00:10:25.069 } 00:10:25.069 ] 00:10:25.069 }' 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.069 09:29:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.328 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.329 09:29:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.587 [2024-11-15 09:29:13.858208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.523 
09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.523 "name": "raid_bdev1", 00:10:26.523 "uuid": "7245f638-437c-4395-bd3c-391b64d43165", 00:10:26.523 "strip_size_kb": 0, 00:10:26.523 "state": "online", 00:10:26.523 "raid_level": "raid1", 00:10:26.523 "superblock": true, 00:10:26.523 "num_base_bdevs": 3, 00:10:26.523 "num_base_bdevs_discovered": 3, 00:10:26.523 "num_base_bdevs_operational": 3, 00:10:26.523 "base_bdevs_list": [ 00:10:26.523 { 00:10:26.523 "name": "BaseBdev1", 00:10:26.523 "uuid": "e61a8e76-2824-5566-ac21-b3d50942b867", 
00:10:26.523 "is_configured": true, 00:10:26.523 "data_offset": 2048, 00:10:26.523 "data_size": 63488 00:10:26.523 }, 00:10:26.523 { 00:10:26.523 "name": "BaseBdev2", 00:10:26.523 "uuid": "fe86d925-ad2d-5964-b254-eb1d78f74109", 00:10:26.523 "is_configured": true, 00:10:26.523 "data_offset": 2048, 00:10:26.523 "data_size": 63488 00:10:26.523 }, 00:10:26.523 { 00:10:26.523 "name": "BaseBdev3", 00:10:26.523 "uuid": "0e1e7a5b-5432-5b64-8b69-702ba74aea98", 00:10:26.523 "is_configured": true, 00:10:26.523 "data_offset": 2048, 00:10:26.523 "data_size": 63488 00:10:26.523 } 00:10:26.523 ] 00:10:26.523 }' 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.523 09:29:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.782 [2024-11-15 09:29:15.232491] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.782 [2024-11-15 09:29:15.232545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.782 [2024-11-15 09:29:15.235562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.782 [2024-11-15 09:29:15.235615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.782 [2024-11-15 09:29:15.235726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.782 [2024-11-15 09:29:15.235738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:26.782 { 00:10:26.782 "results": [ 00:10:26.782 { 00:10:26.782 "job": "raid_bdev1", 
00:10:26.782 "core_mask": "0x1", 00:10:26.782 "workload": "randrw", 00:10:26.782 "percentage": 50, 00:10:26.782 "status": "finished", 00:10:26.782 "queue_depth": 1, 00:10:26.782 "io_size": 131072, 00:10:26.782 "runtime": 1.374734, 00:10:26.782 "iops": 11549.870738630165, 00:10:26.782 "mibps": 1443.7338423287706, 00:10:26.782 "io_failed": 0, 00:10:26.782 "io_timeout": 0, 00:10:26.782 "avg_latency_us": 83.39458062046246, 00:10:26.782 "min_latency_us": 26.606113537117903, 00:10:26.782 "max_latency_us": 1717.1004366812226 00:10:26.782 } 00:10:26.782 ], 00:10:26.782 "core_count": 1 00:10:26.782 } 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69432 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69432 ']' 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69432 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:26.782 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69432 00:10:27.041 killing process with pid 69432 00:10:27.041 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.041 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.041 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69432' 00:10:27.041 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69432 00:10:27.041 09:29:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69432 00:10:27.041 [2024-11-15 09:29:15.278033] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.300 [2024-11-15 09:29:15.557931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.675 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0L2qlEtP2L 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:28.676 00:10:28.676 real 0m4.869s 00:10:28.676 user 0m5.744s 00:10:28.676 sys 0m0.591s 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.676 09:29:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.676 ************************************ 00:10:28.676 END TEST raid_read_error_test 00:10:28.676 ************************************ 00:10:28.676 09:29:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:28.676 09:29:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:28.676 09:29:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.676 09:29:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.676 ************************************ 00:10:28.676 START TEST raid_write_error_test 00:10:28.676 ************************************ 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- 
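The pass/fail criterion just above (bdev_raid.sh@845-@847) is a text pipeline over the bdevperf log: drop the `Job:` summary lines, keep the `raid_bdev1` result line, and take its 6th column as the failed-I/O-per-second rate, which must be exactly `0.00` for a read-error test on redundant raid1. A runnable sketch of that extraction (the sample line below is illustrative only; the real input is the `/raidtest/tmp.*` log written by the bdevperf run above):

```shell
# Sketch of the fail_per_s extraction at bdev_raid.sh@845.
# The sample log line is a stand-in with the same column layout.
sample_log='Job: raid_bdev1 ended in about 1.37 seconds
raid_bdev1 : 11549.87 1443.73 0 0.00 0.00 83.39'

fail_per_s=$(printf '%s\n' "$sample_log" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s == 0.00 ]] && echo "no failed I/O"
```

The trailing `[[ 0.00 = \0\.\0\0 ]]` in the trace is the same comparison after bash escapes the pattern characters of the right-hand side.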
common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.676 09:29:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FlMnTNO0aE 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69578 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69578 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69578 ']' 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.676 09:29:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.934 [2024-11-15 09:29:17.144408] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:10:28.935 [2024-11-15 09:29:17.144542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69578 ] 00:10:28.935 [2024-11-15 09:29:17.308449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.193 [2024-11-15 09:29:17.439917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.452 [2024-11-15 09:29:17.673810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.452 [2024-11-15 09:29:17.673895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.711 BaseBdev1_malloc 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.711 true 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.711 [2024-11-15 09:29:18.088058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.711 [2024-11-15 09:29:18.088129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.711 [2024-11-15 09:29:18.088151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.711 [2024-11-15 09:29:18.088163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.711 [2024-11-15 09:29:18.090317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.711 [2024-11-15 09:29:18.090364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.711 BaseBdev1 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.711 BaseBdev2_malloc 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.711 true 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.711 [2024-11-15 09:29:18.153482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.711 [2024-11-15 09:29:18.153548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.711 [2024-11-15 09:29:18.153566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.711 [2024-11-15 09:29:18.153577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.711 [2024-11-15 09:29:18.155665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.711 [2024-11-15 09:29:18.155809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.711 BaseBdev2 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.711 09:29:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.711 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.970 BaseBdev3_malloc 00:10:29.970 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.970 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.970 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.970 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.970 true 00:10:29.970 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.970 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.971 [2024-11-15 09:29:18.228041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.971 [2024-11-15 09:29:18.228107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.971 [2024-11-15 09:29:18.228125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.971 [2024-11-15 09:29:18.228136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.971 [2024-11-15 09:29:18.230244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.971 [2024-11-15 09:29:18.230284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:29.971 BaseBdev3 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.971 [2024-11-15 09:29:18.240082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.971 [2024-11-15 09:29:18.241959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.971 [2024-11-15 09:29:18.242050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.971 [2024-11-15 09:29:18.242348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.971 [2024-11-15 09:29:18.242403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.971 [2024-11-15 09:29:18.242730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:29.971 [2024-11-15 09:29:18.242954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.971 [2024-11-15 09:29:18.243004] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:29.971 [2024-11-15 09:29:18.243198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.971 "name": "raid_bdev1", 00:10:29.971 "uuid": "7eba0644-395b-4179-a070-164775e960d2", 00:10:29.971 "strip_size_kb": 0, 00:10:29.971 "state": "online", 00:10:29.971 "raid_level": "raid1", 00:10:29.971 "superblock": true, 00:10:29.971 "num_base_bdevs": 3, 00:10:29.971 "num_base_bdevs_discovered": 3, 00:10:29.971 "num_base_bdevs_operational": 3, 00:10:29.971 "base_bdevs_list": [ 00:10:29.971 { 00:10:29.971 "name": "BaseBdev1", 00:10:29.971 
"uuid": "02db0674-daa6-5335-a7c8-a861dafe8ebf", 00:10:29.971 "is_configured": true, 00:10:29.971 "data_offset": 2048, 00:10:29.971 "data_size": 63488 00:10:29.971 }, 00:10:29.971 { 00:10:29.971 "name": "BaseBdev2", 00:10:29.971 "uuid": "f02df8c1-15eb-5be6-bb12-3075121c6955", 00:10:29.971 "is_configured": true, 00:10:29.971 "data_offset": 2048, 00:10:29.971 "data_size": 63488 00:10:29.971 }, 00:10:29.971 { 00:10:29.971 "name": "BaseBdev3", 00:10:29.971 "uuid": "46e9e211-19e4-5f62-bb35-42adea4bfc62", 00:10:29.971 "is_configured": true, 00:10:29.971 "data_offset": 2048, 00:10:29.971 "data_size": 63488 00:10:29.971 } 00:10:29.971 ] 00:10:29.971 }' 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.971 09:29:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.539 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.539 09:29:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.539 [2024-11-15 09:29:18.856368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.474 [2024-11-15 09:29:19.764841] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:31.474 [2024-11-15 09:29:19.764930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.474 [2024-11-15 09:29:19.765163] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.474 "name": "raid_bdev1", 00:10:31.474 "uuid": "7eba0644-395b-4179-a070-164775e960d2", 00:10:31.474 "strip_size_kb": 0, 00:10:31.474 "state": "online", 00:10:31.474 "raid_level": "raid1", 00:10:31.474 "superblock": true, 00:10:31.474 "num_base_bdevs": 3, 00:10:31.474 "num_base_bdevs_discovered": 2, 00:10:31.474 "num_base_bdevs_operational": 2, 00:10:31.474 "base_bdevs_list": [ 00:10:31.474 { 00:10:31.474 "name": null, 00:10:31.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.474 "is_configured": false, 00:10:31.474 "data_offset": 0, 00:10:31.474 "data_size": 63488 00:10:31.474 }, 00:10:31.474 { 00:10:31.474 "name": "BaseBdev2", 00:10:31.474 "uuid": "f02df8c1-15eb-5be6-bb12-3075121c6955", 00:10:31.474 "is_configured": true, 00:10:31.474 "data_offset": 2048, 00:10:31.474 "data_size": 63488 00:10:31.474 }, 00:10:31.474 { 00:10:31.474 "name": "BaseBdev3", 00:10:31.474 "uuid": "46e9e211-19e4-5f62-bb35-42adea4bfc62", 00:10:31.474 "is_configured": true, 00:10:31.474 "data_offset": 2048, 00:10:31.474 "data_size": 63488 00:10:31.474 } 00:10:31.474 ] 00:10:31.474 }' 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.474 09:29:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.043 [2024-11-15 09:29:20.289466] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.043 [2024-11-15 09:29:20.289521] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.043 [2024-11-15 09:29:20.292542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.043 { 00:10:32.043 "results": [ 00:10:32.043 { 00:10:32.043 "job": "raid_bdev1", 00:10:32.043 "core_mask": "0x1", 00:10:32.043 "workload": "randrw", 00:10:32.043 "percentage": 50, 00:10:32.043 "status": "finished", 00:10:32.043 "queue_depth": 1, 00:10:32.043 "io_size": 131072, 00:10:32.043 "runtime": 1.434084, 00:10:32.043 "iops": 13239.112911098653, 00:10:32.043 "mibps": 1654.8891138873316, 00:10:32.043 "io_failed": 0, 00:10:32.043 "io_timeout": 0, 00:10:32.043 "avg_latency_us": 72.3956227916962, 00:10:32.043 "min_latency_us": 24.705676855895195, 00:10:32.043 "max_latency_us": 1695.6366812227075 00:10:32.043 } 00:10:32.043 ], 00:10:32.043 "core_count": 1 00:10:32.043 } 00:10:32.043 [2024-11-15 09:29:20.292730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.043 [2024-11-15 09:29:20.292861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.043 [2024-11-15 09:29:20.292880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69578 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69578 ']' 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69578 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:32.043 09:29:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69578 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:32.043 killing process with pid 69578 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69578' 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69578 00:10:32.043 [2024-11-15 09:29:20.340635] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.043 09:29:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69578 00:10:32.301 [2024-11-15 09:29:20.610204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FlMnTNO0aE 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:33.679 00:10:33.679 real 0m4.946s 00:10:33.679 user 0m5.917s 00:10:33.679 sys 0m0.612s 00:10:33.679 09:29:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:33.679 09:29:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.679 ************************************ 00:10:33.679 END TEST raid_write_error_test 00:10:33.679 ************************************ 00:10:33.679 09:29:22 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:33.679 09:29:22 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:33.679 09:29:22 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:33.679 09:29:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:33.679 09:29:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:33.679 09:29:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.679 ************************************ 00:10:33.679 START TEST raid_state_function_test 00:10:33.679 ************************************ 00:10:33.679 09:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:10:33.679 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:33.679 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:33.679 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:33.680 
09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69727 00:10:33.680 Process raid pid: 69727 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69727' 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69727 00:10:33.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69727 ']' 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:33.680 09:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.680 [2024-11-15 09:29:22.130979] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:10:33.680 [2024-11-15 09:29:22.131116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.937 [2024-11-15 09:29:22.298744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.196 [2024-11-15 09:29:22.437764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.454 [2024-11-15 09:29:22.676721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.454 [2024-11-15 09:29:22.676775] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.713 [2024-11-15 09:29:23.041325] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.713 [2024-11-15 09:29:23.041395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.713 [2024-11-15 09:29:23.041410] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.713 [2024-11-15 09:29:23.041426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.713 [2024-11-15 09:29:23.041438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:34.713 [2024-11-15 09:29:23.041453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.713 [2024-11-15 09:29:23.041462] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.713 [2024-11-15 09:29:23.041489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.713 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.714 "name": "Existed_Raid", 00:10:34.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.714 "strip_size_kb": 64, 00:10:34.714 "state": "configuring", 00:10:34.714 "raid_level": "raid0", 00:10:34.714 "superblock": false, 00:10:34.714 "num_base_bdevs": 4, 00:10:34.714 "num_base_bdevs_discovered": 0, 00:10:34.714 "num_base_bdevs_operational": 4, 00:10:34.714 "base_bdevs_list": [ 00:10:34.714 { 00:10:34.714 "name": "BaseBdev1", 00:10:34.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.714 "is_configured": false, 00:10:34.714 "data_offset": 0, 00:10:34.714 "data_size": 0 00:10:34.714 }, 00:10:34.714 { 00:10:34.714 "name": "BaseBdev2", 00:10:34.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.714 "is_configured": false, 00:10:34.714 "data_offset": 0, 00:10:34.714 "data_size": 0 00:10:34.714 }, 00:10:34.714 { 00:10:34.714 "name": "BaseBdev3", 00:10:34.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.714 "is_configured": false, 00:10:34.714 "data_offset": 0, 00:10:34.714 "data_size": 0 00:10:34.714 }, 00:10:34.714 { 00:10:34.714 "name": "BaseBdev4", 00:10:34.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.714 "is_configured": false, 00:10:34.714 "data_offset": 0, 00:10:34.714 "data_size": 0 00:10:34.714 } 00:10:34.714 ] 00:10:34.714 }' 00:10:34.714 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.714 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.280 [2024-11-15 09:29:23.492519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.280 [2024-11-15 09:29:23.492573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.280 [2024-11-15 09:29:23.504544] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.280 [2024-11-15 09:29:23.504605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.280 [2024-11-15 09:29:23.504618] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.280 [2024-11-15 09:29:23.504629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.280 [2024-11-15 09:29:23.504637] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.280 [2024-11-15 09:29:23.504647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.280 [2024-11-15 09:29:23.504655] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.280 [2024-11-15 09:29:23.504665] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.280 [2024-11-15 09:29:23.555838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.280 BaseBdev1 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.280 [ 00:10:35.280 { 00:10:35.280 "name": "BaseBdev1", 00:10:35.280 "aliases": [ 00:10:35.280 "060b3db6-6e37-45d9-b382-0c75b580e25d" 00:10:35.280 ], 00:10:35.280 "product_name": "Malloc disk", 00:10:35.280 "block_size": 512, 00:10:35.280 "num_blocks": 65536, 00:10:35.280 "uuid": "060b3db6-6e37-45d9-b382-0c75b580e25d", 00:10:35.280 "assigned_rate_limits": { 00:10:35.280 "rw_ios_per_sec": 0, 00:10:35.280 "rw_mbytes_per_sec": 0, 00:10:35.280 "r_mbytes_per_sec": 0, 00:10:35.280 "w_mbytes_per_sec": 0 00:10:35.280 }, 00:10:35.280 "claimed": true, 00:10:35.280 "claim_type": "exclusive_write", 00:10:35.280 "zoned": false, 00:10:35.280 "supported_io_types": { 00:10:35.280 "read": true, 00:10:35.280 "write": true, 00:10:35.280 "unmap": true, 00:10:35.280 "flush": true, 00:10:35.280 "reset": true, 00:10:35.280 "nvme_admin": false, 00:10:35.280 "nvme_io": false, 00:10:35.280 "nvme_io_md": false, 00:10:35.280 "write_zeroes": true, 00:10:35.280 "zcopy": true, 00:10:35.280 "get_zone_info": false, 00:10:35.280 "zone_management": false, 00:10:35.280 "zone_append": false, 00:10:35.280 "compare": false, 00:10:35.280 "compare_and_write": false, 00:10:35.280 "abort": true, 00:10:35.280 "seek_hole": false, 00:10:35.280 "seek_data": false, 00:10:35.280 "copy": true, 00:10:35.280 "nvme_iov_md": false 00:10:35.280 }, 00:10:35.280 "memory_domains": [ 00:10:35.280 { 00:10:35.280 "dma_device_id": "system", 00:10:35.280 "dma_device_type": 1 00:10:35.280 }, 00:10:35.280 { 00:10:35.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.280 "dma_device_type": 2 00:10:35.280 } 00:10:35.280 ], 00:10:35.280 "driver_specific": {} 00:10:35.280 } 00:10:35.280 ] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.280 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.280 "name": "Existed_Raid", 
00:10:35.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.280 "strip_size_kb": 64, 00:10:35.280 "state": "configuring", 00:10:35.280 "raid_level": "raid0", 00:10:35.280 "superblock": false, 00:10:35.280 "num_base_bdevs": 4, 00:10:35.280 "num_base_bdevs_discovered": 1, 00:10:35.280 "num_base_bdevs_operational": 4, 00:10:35.280 "base_bdevs_list": [ 00:10:35.280 { 00:10:35.280 "name": "BaseBdev1", 00:10:35.280 "uuid": "060b3db6-6e37-45d9-b382-0c75b580e25d", 00:10:35.280 "is_configured": true, 00:10:35.280 "data_offset": 0, 00:10:35.280 "data_size": 65536 00:10:35.280 }, 00:10:35.280 { 00:10:35.280 "name": "BaseBdev2", 00:10:35.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.281 "is_configured": false, 00:10:35.281 "data_offset": 0, 00:10:35.281 "data_size": 0 00:10:35.281 }, 00:10:35.281 { 00:10:35.281 "name": "BaseBdev3", 00:10:35.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.281 "is_configured": false, 00:10:35.281 "data_offset": 0, 00:10:35.281 "data_size": 0 00:10:35.281 }, 00:10:35.281 { 00:10:35.281 "name": "BaseBdev4", 00:10:35.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.281 "is_configured": false, 00:10:35.281 "data_offset": 0, 00:10:35.281 "data_size": 0 00:10:35.281 } 00:10:35.281 ] 00:10:35.281 }' 00:10:35.281 09:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.281 09:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 [2024-11-15 09:29:24.007205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.847 [2024-11-15 09:29:24.007333] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.847 [2024-11-15 09:29:24.015241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.847 [2024-11-15 09:29:24.017615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.847 [2024-11-15 09:29:24.017704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.847 [2024-11-15 09:29:24.017743] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.847 [2024-11-15 09:29:24.017772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.847 [2024-11-15 09:29:24.017812] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.847 [2024-11-15 09:29:24.017861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.847 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.848 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.848 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.848 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.848 "name": "Existed_Raid", 00:10:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.848 "strip_size_kb": 64, 00:10:35.848 "state": "configuring", 00:10:35.848 "raid_level": "raid0", 00:10:35.848 "superblock": false, 00:10:35.848 "num_base_bdevs": 4, 00:10:35.848 
"num_base_bdevs_discovered": 1, 00:10:35.848 "num_base_bdevs_operational": 4, 00:10:35.848 "base_bdevs_list": [ 00:10:35.848 { 00:10:35.848 "name": "BaseBdev1", 00:10:35.848 "uuid": "060b3db6-6e37-45d9-b382-0c75b580e25d", 00:10:35.848 "is_configured": true, 00:10:35.848 "data_offset": 0, 00:10:35.848 "data_size": 65536 00:10:35.848 }, 00:10:35.848 { 00:10:35.848 "name": "BaseBdev2", 00:10:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.848 "is_configured": false, 00:10:35.848 "data_offset": 0, 00:10:35.848 "data_size": 0 00:10:35.848 }, 00:10:35.848 { 00:10:35.848 "name": "BaseBdev3", 00:10:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.848 "is_configured": false, 00:10:35.848 "data_offset": 0, 00:10:35.848 "data_size": 0 00:10:35.848 }, 00:10:35.848 { 00:10:35.848 "name": "BaseBdev4", 00:10:35.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.848 "is_configured": false, 00:10:35.848 "data_offset": 0, 00:10:35.848 "data_size": 0 00:10:35.848 } 00:10:35.848 ] 00:10:35.848 }' 00:10:35.848 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.848 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.106 [2024-11-15 09:29:24.454747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.106 BaseBdev2 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:36.106 09:29:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.106 [ 00:10:36.106 { 00:10:36.106 "name": "BaseBdev2", 00:10:36.106 "aliases": [ 00:10:36.106 "e7536e67-5403-4b4b-9886-30f8e45b348a" 00:10:36.106 ], 00:10:36.106 "product_name": "Malloc disk", 00:10:36.106 "block_size": 512, 00:10:36.106 "num_blocks": 65536, 00:10:36.106 "uuid": "e7536e67-5403-4b4b-9886-30f8e45b348a", 00:10:36.106 "assigned_rate_limits": { 00:10:36.106 "rw_ios_per_sec": 0, 00:10:36.106 "rw_mbytes_per_sec": 0, 00:10:36.106 "r_mbytes_per_sec": 0, 00:10:36.106 "w_mbytes_per_sec": 0 00:10:36.106 }, 00:10:36.106 "claimed": true, 00:10:36.106 "claim_type": "exclusive_write", 00:10:36.106 "zoned": false, 00:10:36.106 "supported_io_types": { 
00:10:36.106 "read": true, 00:10:36.106 "write": true, 00:10:36.106 "unmap": true, 00:10:36.106 "flush": true, 00:10:36.106 "reset": true, 00:10:36.106 "nvme_admin": false, 00:10:36.106 "nvme_io": false, 00:10:36.106 "nvme_io_md": false, 00:10:36.106 "write_zeroes": true, 00:10:36.106 "zcopy": true, 00:10:36.106 "get_zone_info": false, 00:10:36.106 "zone_management": false, 00:10:36.106 "zone_append": false, 00:10:36.106 "compare": false, 00:10:36.106 "compare_and_write": false, 00:10:36.106 "abort": true, 00:10:36.106 "seek_hole": false, 00:10:36.106 "seek_data": false, 00:10:36.106 "copy": true, 00:10:36.106 "nvme_iov_md": false 00:10:36.106 }, 00:10:36.106 "memory_domains": [ 00:10:36.106 { 00:10:36.106 "dma_device_id": "system", 00:10:36.106 "dma_device_type": 1 00:10:36.106 }, 00:10:36.106 { 00:10:36.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.106 "dma_device_type": 2 00:10:36.106 } 00:10:36.106 ], 00:10:36.106 "driver_specific": {} 00:10:36.106 } 00:10:36.106 ] 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.106 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.107 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.107 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.107 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.107 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.107 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.107 "name": "Existed_Raid", 00:10:36.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.107 "strip_size_kb": 64, 00:10:36.107 "state": "configuring", 00:10:36.107 "raid_level": "raid0", 00:10:36.107 "superblock": false, 00:10:36.107 "num_base_bdevs": 4, 00:10:36.107 "num_base_bdevs_discovered": 2, 00:10:36.107 "num_base_bdevs_operational": 4, 00:10:36.107 "base_bdevs_list": [ 00:10:36.107 { 00:10:36.107 "name": "BaseBdev1", 00:10:36.107 "uuid": "060b3db6-6e37-45d9-b382-0c75b580e25d", 00:10:36.107 "is_configured": true, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 65536 00:10:36.107 }, 00:10:36.107 { 00:10:36.107 "name": "BaseBdev2", 00:10:36.107 "uuid": "e7536e67-5403-4b4b-9886-30f8e45b348a", 00:10:36.107 
"is_configured": true, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 65536 00:10:36.107 }, 00:10:36.107 { 00:10:36.107 "name": "BaseBdev3", 00:10:36.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.107 "is_configured": false, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 0 00:10:36.107 }, 00:10:36.107 { 00:10:36.107 "name": "BaseBdev4", 00:10:36.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.107 "is_configured": false, 00:10:36.107 "data_offset": 0, 00:10:36.107 "data_size": 0 00:10:36.107 } 00:10:36.107 ] 00:10:36.107 }' 00:10:36.107 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.107 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.674 [2024-11-15 09:29:24.995211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.674 BaseBdev3 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.674 09:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.674 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.674 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.674 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.674 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.674 [ 00:10:36.674 { 00:10:36.674 "name": "BaseBdev3", 00:10:36.674 "aliases": [ 00:10:36.674 "7c214e6d-2dc3-4652-b5bc-fcd4462da47c" 00:10:36.674 ], 00:10:36.674 "product_name": "Malloc disk", 00:10:36.674 "block_size": 512, 00:10:36.674 "num_blocks": 65536, 00:10:36.674 "uuid": "7c214e6d-2dc3-4652-b5bc-fcd4462da47c", 00:10:36.674 "assigned_rate_limits": { 00:10:36.674 "rw_ios_per_sec": 0, 00:10:36.674 "rw_mbytes_per_sec": 0, 00:10:36.674 "r_mbytes_per_sec": 0, 00:10:36.674 "w_mbytes_per_sec": 0 00:10:36.674 }, 00:10:36.674 "claimed": true, 00:10:36.674 "claim_type": "exclusive_write", 00:10:36.674 "zoned": false, 00:10:36.674 "supported_io_types": { 00:10:36.674 "read": true, 00:10:36.674 "write": true, 00:10:36.674 "unmap": true, 00:10:36.674 "flush": true, 00:10:36.674 "reset": true, 00:10:36.674 "nvme_admin": false, 00:10:36.674 "nvme_io": false, 00:10:36.674 "nvme_io_md": false, 00:10:36.674 "write_zeroes": true, 00:10:36.674 "zcopy": true, 00:10:36.674 "get_zone_info": false, 00:10:36.674 "zone_management": false, 00:10:36.675 "zone_append": false, 00:10:36.675 "compare": false, 00:10:36.675 "compare_and_write": false, 
00:10:36.675 "abort": true, 00:10:36.675 "seek_hole": false, 00:10:36.675 "seek_data": false, 00:10:36.675 "copy": true, 00:10:36.675 "nvme_iov_md": false 00:10:36.675 }, 00:10:36.675 "memory_domains": [ 00:10:36.675 { 00:10:36.675 "dma_device_id": "system", 00:10:36.675 "dma_device_type": 1 00:10:36.675 }, 00:10:36.675 { 00:10:36.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.675 "dma_device_type": 2 00:10:36.675 } 00:10:36.675 ], 00:10:36.675 "driver_specific": {} 00:10:36.675 } 00:10:36.675 ] 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.675 "name": "Existed_Raid", 00:10:36.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.675 "strip_size_kb": 64, 00:10:36.675 "state": "configuring", 00:10:36.675 "raid_level": "raid0", 00:10:36.675 "superblock": false, 00:10:36.675 "num_base_bdevs": 4, 00:10:36.675 "num_base_bdevs_discovered": 3, 00:10:36.675 "num_base_bdevs_operational": 4, 00:10:36.675 "base_bdevs_list": [ 00:10:36.675 { 00:10:36.675 "name": "BaseBdev1", 00:10:36.675 "uuid": "060b3db6-6e37-45d9-b382-0c75b580e25d", 00:10:36.675 "is_configured": true, 00:10:36.675 "data_offset": 0, 00:10:36.675 "data_size": 65536 00:10:36.675 }, 00:10:36.675 { 00:10:36.675 "name": "BaseBdev2", 00:10:36.675 "uuid": "e7536e67-5403-4b4b-9886-30f8e45b348a", 00:10:36.675 "is_configured": true, 00:10:36.675 "data_offset": 0, 00:10:36.675 "data_size": 65536 00:10:36.675 }, 00:10:36.675 { 00:10:36.675 "name": "BaseBdev3", 00:10:36.675 "uuid": "7c214e6d-2dc3-4652-b5bc-fcd4462da47c", 00:10:36.675 "is_configured": true, 00:10:36.675 "data_offset": 0, 00:10:36.675 "data_size": 65536 00:10:36.675 }, 00:10:36.675 { 00:10:36.675 "name": "BaseBdev4", 00:10:36.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.675 "is_configured": false, 
00:10:36.675 "data_offset": 0, 00:10:36.675 "data_size": 0 00:10:36.675 } 00:10:36.675 ] 00:10:36.675 }' 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.675 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.301 [2024-11-15 09:29:25.506843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:37.301 [2024-11-15 09:29:25.507021] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:37.301 [2024-11-15 09:29:25.507059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:37.301 [2024-11-15 09:29:25.507430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:37.301 [2024-11-15 09:29:25.507686] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:37.301 [2024-11-15 09:29:25.507741] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:37.301 [2024-11-15 09:29:25.508143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.301 BaseBdev4 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.301 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.301 [ 00:10:37.301 { 00:10:37.301 "name": "BaseBdev4", 00:10:37.301 "aliases": [ 00:10:37.301 "66502348-bc9b-4932-917f-a7c446bb7100" 00:10:37.301 ], 00:10:37.301 "product_name": "Malloc disk", 00:10:37.301 "block_size": 512, 00:10:37.301 "num_blocks": 65536, 00:10:37.301 "uuid": "66502348-bc9b-4932-917f-a7c446bb7100", 00:10:37.301 "assigned_rate_limits": { 00:10:37.301 "rw_ios_per_sec": 0, 00:10:37.301 "rw_mbytes_per_sec": 0, 00:10:37.301 "r_mbytes_per_sec": 0, 00:10:37.301 "w_mbytes_per_sec": 0 00:10:37.301 }, 00:10:37.301 "claimed": true, 00:10:37.301 "claim_type": "exclusive_write", 00:10:37.301 "zoned": false, 00:10:37.301 "supported_io_types": { 00:10:37.301 "read": true, 00:10:37.301 "write": true, 00:10:37.301 "unmap": true, 00:10:37.301 "flush": true, 00:10:37.301 "reset": true, 00:10:37.301 
"nvme_admin": false, 00:10:37.301 "nvme_io": false, 00:10:37.301 "nvme_io_md": false, 00:10:37.301 "write_zeroes": true, 00:10:37.301 "zcopy": true, 00:10:37.301 "get_zone_info": false, 00:10:37.301 "zone_management": false, 00:10:37.301 "zone_append": false, 00:10:37.301 "compare": false, 00:10:37.301 "compare_and_write": false, 00:10:37.301 "abort": true, 00:10:37.301 "seek_hole": false, 00:10:37.301 "seek_data": false, 00:10:37.301 "copy": true, 00:10:37.301 "nvme_iov_md": false 00:10:37.301 }, 00:10:37.301 "memory_domains": [ 00:10:37.301 { 00:10:37.301 "dma_device_id": "system", 00:10:37.301 "dma_device_type": 1 00:10:37.301 }, 00:10:37.301 { 00:10:37.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.301 "dma_device_type": 2 00:10:37.301 } 00:10:37.301 ], 00:10:37.302 "driver_specific": {} 00:10:37.302 } 00:10:37.302 ] 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.302 09:29:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.302 "name": "Existed_Raid", 00:10:37.302 "uuid": "a4d18857-ccfd-4c53-ace9-bf4a1489eee9", 00:10:37.302 "strip_size_kb": 64, 00:10:37.302 "state": "online", 00:10:37.302 "raid_level": "raid0", 00:10:37.302 "superblock": false, 00:10:37.302 "num_base_bdevs": 4, 00:10:37.302 "num_base_bdevs_discovered": 4, 00:10:37.302 "num_base_bdevs_operational": 4, 00:10:37.302 "base_bdevs_list": [ 00:10:37.302 { 00:10:37.302 "name": "BaseBdev1", 00:10:37.302 "uuid": "060b3db6-6e37-45d9-b382-0c75b580e25d", 00:10:37.302 "is_configured": true, 00:10:37.302 "data_offset": 0, 00:10:37.302 "data_size": 65536 00:10:37.302 }, 00:10:37.302 { 00:10:37.302 "name": "BaseBdev2", 00:10:37.302 "uuid": "e7536e67-5403-4b4b-9886-30f8e45b348a", 00:10:37.302 "is_configured": true, 00:10:37.302 "data_offset": 0, 00:10:37.302 "data_size": 65536 00:10:37.302 }, 00:10:37.302 { 00:10:37.302 "name": "BaseBdev3", 00:10:37.302 "uuid": 
"7c214e6d-2dc3-4652-b5bc-fcd4462da47c", 00:10:37.302 "is_configured": true, 00:10:37.302 "data_offset": 0, 00:10:37.302 "data_size": 65536 00:10:37.302 }, 00:10:37.302 { 00:10:37.302 "name": "BaseBdev4", 00:10:37.302 "uuid": "66502348-bc9b-4932-917f-a7c446bb7100", 00:10:37.302 "is_configured": true, 00:10:37.302 "data_offset": 0, 00:10:37.302 "data_size": 65536 00:10:37.302 } 00:10:37.302 ] 00:10:37.302 }' 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.302 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.560 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.561 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.561 09:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.561 [2024-11-15 09:29:26.006508] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.561 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.820 09:29:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.820 "name": "Existed_Raid", 00:10:37.820 "aliases": [ 00:10:37.820 "a4d18857-ccfd-4c53-ace9-bf4a1489eee9" 00:10:37.820 ], 00:10:37.820 "product_name": "Raid Volume", 00:10:37.820 "block_size": 512, 00:10:37.820 "num_blocks": 262144, 00:10:37.820 "uuid": "a4d18857-ccfd-4c53-ace9-bf4a1489eee9", 00:10:37.820 "assigned_rate_limits": { 00:10:37.820 "rw_ios_per_sec": 0, 00:10:37.820 "rw_mbytes_per_sec": 0, 00:10:37.820 "r_mbytes_per_sec": 0, 00:10:37.820 "w_mbytes_per_sec": 0 00:10:37.820 }, 00:10:37.820 "claimed": false, 00:10:37.820 "zoned": false, 00:10:37.820 "supported_io_types": { 00:10:37.820 "read": true, 00:10:37.820 "write": true, 00:10:37.820 "unmap": true, 00:10:37.820 "flush": true, 00:10:37.820 "reset": true, 00:10:37.820 "nvme_admin": false, 00:10:37.820 "nvme_io": false, 00:10:37.820 "nvme_io_md": false, 00:10:37.820 "write_zeroes": true, 00:10:37.820 "zcopy": false, 00:10:37.820 "get_zone_info": false, 00:10:37.820 "zone_management": false, 00:10:37.820 "zone_append": false, 00:10:37.820 "compare": false, 00:10:37.820 "compare_and_write": false, 00:10:37.820 "abort": false, 00:10:37.820 "seek_hole": false, 00:10:37.820 "seek_data": false, 00:10:37.820 "copy": false, 00:10:37.820 "nvme_iov_md": false 00:10:37.820 }, 00:10:37.820 "memory_domains": [ 00:10:37.820 { 00:10:37.820 "dma_device_id": "system", 00:10:37.820 "dma_device_type": 1 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.820 "dma_device_type": 2 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "system", 00:10:37.820 "dma_device_type": 1 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.820 "dma_device_type": 2 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "system", 00:10:37.820 "dma_device_type": 1 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:37.820 "dma_device_type": 2 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "system", 00:10:37.820 "dma_device_type": 1 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.820 "dma_device_type": 2 00:10:37.820 } 00:10:37.820 ], 00:10:37.820 "driver_specific": { 00:10:37.820 "raid": { 00:10:37.820 "uuid": "a4d18857-ccfd-4c53-ace9-bf4a1489eee9", 00:10:37.820 "strip_size_kb": 64, 00:10:37.820 "state": "online", 00:10:37.820 "raid_level": "raid0", 00:10:37.820 "superblock": false, 00:10:37.820 "num_base_bdevs": 4, 00:10:37.820 "num_base_bdevs_discovered": 4, 00:10:37.820 "num_base_bdevs_operational": 4, 00:10:37.820 "base_bdevs_list": [ 00:10:37.820 { 00:10:37.820 "name": "BaseBdev1", 00:10:37.820 "uuid": "060b3db6-6e37-45d9-b382-0c75b580e25d", 00:10:37.820 "is_configured": true, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "name": "BaseBdev2", 00:10:37.820 "uuid": "e7536e67-5403-4b4b-9886-30f8e45b348a", 00:10:37.820 "is_configured": true, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "name": "BaseBdev3", 00:10:37.820 "uuid": "7c214e6d-2dc3-4652-b5bc-fcd4462da47c", 00:10:37.820 "is_configured": true, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 }, 00:10:37.820 { 00:10:37.820 "name": "BaseBdev4", 00:10:37.820 "uuid": "66502348-bc9b-4932-917f-a7c446bb7100", 00:10:37.820 "is_configured": true, 00:10:37.820 "data_offset": 0, 00:10:37.820 "data_size": 65536 00:10:37.820 } 00:10:37.820 ] 00:10:37.820 } 00:10:37.820 } 00:10:37.820 }' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:37.820 BaseBdev2 00:10:37.820 BaseBdev3 
00:10:37.820 BaseBdev4' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.820 09:29:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.079 09:29:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.079 [2024-11-15 09:29:26.321657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.079 [2024-11-15 09:29:26.321751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.079 [2024-11-15 09:29:26.321837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.079 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.079 "name": "Existed_Raid", 00:10:38.079 "uuid": "a4d18857-ccfd-4c53-ace9-bf4a1489eee9", 00:10:38.079 "strip_size_kb": 64, 00:10:38.079 "state": "offline", 00:10:38.079 "raid_level": "raid0", 00:10:38.079 "superblock": false, 00:10:38.079 "num_base_bdevs": 4, 00:10:38.079 "num_base_bdevs_discovered": 3, 00:10:38.079 "num_base_bdevs_operational": 3, 00:10:38.079 "base_bdevs_list": [ 00:10:38.079 { 00:10:38.079 "name": null, 00:10:38.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.079 "is_configured": false, 00:10:38.079 "data_offset": 0, 00:10:38.079 "data_size": 65536 00:10:38.079 }, 00:10:38.079 { 00:10:38.079 "name": "BaseBdev2", 00:10:38.079 "uuid": "e7536e67-5403-4b4b-9886-30f8e45b348a", 00:10:38.079 "is_configured": 
true, 00:10:38.079 "data_offset": 0, 00:10:38.079 "data_size": 65536 00:10:38.079 }, 00:10:38.079 { 00:10:38.079 "name": "BaseBdev3", 00:10:38.079 "uuid": "7c214e6d-2dc3-4652-b5bc-fcd4462da47c", 00:10:38.079 "is_configured": true, 00:10:38.079 "data_offset": 0, 00:10:38.079 "data_size": 65536 00:10:38.079 }, 00:10:38.079 { 00:10:38.079 "name": "BaseBdev4", 00:10:38.079 "uuid": "66502348-bc9b-4932-917f-a7c446bb7100", 00:10:38.079 "is_configured": true, 00:10:38.079 "data_offset": 0, 00:10:38.080 "data_size": 65536 00:10:38.080 } 00:10:38.080 ] 00:10:38.080 }' 00:10:38.080 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.080 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:38.646 09:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 [2024-11-15 09:29:26.951714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.646 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.905 [2024-11-15 09:29:27.124696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.905 09:29:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.905 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.905 [2024-11-15 09:29:27.291089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:38.905 [2024-11-15 09:29:27.291193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:39.164 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.164 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.164 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.164 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.165 BaseBdev2 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.165 [ 00:10:39.165 { 00:10:39.165 "name": "BaseBdev2", 00:10:39.165 "aliases": [ 00:10:39.165 "3d0af6f6-f33f-448f-8a2e-3f2ca0170083" 00:10:39.165 ], 00:10:39.165 "product_name": "Malloc disk", 00:10:39.165 "block_size": 512, 00:10:39.165 "num_blocks": 65536, 00:10:39.165 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:39.165 "assigned_rate_limits": { 00:10:39.165 "rw_ios_per_sec": 0, 00:10:39.165 "rw_mbytes_per_sec": 0, 00:10:39.165 "r_mbytes_per_sec": 0, 00:10:39.165 "w_mbytes_per_sec": 0 00:10:39.165 }, 00:10:39.165 "claimed": false, 00:10:39.165 "zoned": false, 00:10:39.165 "supported_io_types": { 00:10:39.165 "read": true, 00:10:39.165 "write": true, 00:10:39.165 "unmap": true, 00:10:39.165 "flush": true, 00:10:39.165 "reset": true, 00:10:39.165 "nvme_admin": false, 00:10:39.165 "nvme_io": false, 00:10:39.165 "nvme_io_md": false, 00:10:39.165 "write_zeroes": true, 00:10:39.165 "zcopy": true, 00:10:39.165 "get_zone_info": false, 00:10:39.165 "zone_management": false, 00:10:39.165 "zone_append": false, 00:10:39.165 "compare": false, 00:10:39.165 "compare_and_write": false, 00:10:39.165 "abort": true, 00:10:39.165 "seek_hole": false, 00:10:39.165 
"seek_data": false, 00:10:39.165 "copy": true, 00:10:39.165 "nvme_iov_md": false 00:10:39.165 }, 00:10:39.165 "memory_domains": [ 00:10:39.165 { 00:10:39.165 "dma_device_id": "system", 00:10:39.165 "dma_device_type": 1 00:10:39.165 }, 00:10:39.165 { 00:10:39.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.165 "dma_device_type": 2 00:10:39.165 } 00:10:39.165 ], 00:10:39.165 "driver_specific": {} 00:10:39.165 } 00:10:39.165 ] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.165 BaseBdev3 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.165 [ 00:10:39.165 { 00:10:39.165 "name": "BaseBdev3", 00:10:39.165 "aliases": [ 00:10:39.165 "588094ca-94f4-4ddb-9e99-78f2ec69ffd9" 00:10:39.165 ], 00:10:39.165 "product_name": "Malloc disk", 00:10:39.165 "block_size": 512, 00:10:39.165 "num_blocks": 65536, 00:10:39.165 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:39.165 "assigned_rate_limits": { 00:10:39.165 "rw_ios_per_sec": 0, 00:10:39.165 "rw_mbytes_per_sec": 0, 00:10:39.165 "r_mbytes_per_sec": 0, 00:10:39.165 "w_mbytes_per_sec": 0 00:10:39.165 }, 00:10:39.165 "claimed": false, 00:10:39.165 "zoned": false, 00:10:39.165 "supported_io_types": { 00:10:39.165 "read": true, 00:10:39.165 "write": true, 00:10:39.165 "unmap": true, 00:10:39.165 "flush": true, 00:10:39.165 "reset": true, 00:10:39.165 "nvme_admin": false, 00:10:39.165 "nvme_io": false, 00:10:39.165 "nvme_io_md": false, 00:10:39.165 "write_zeroes": true, 00:10:39.165 "zcopy": true, 00:10:39.165 "get_zone_info": false, 00:10:39.165 "zone_management": false, 00:10:39.165 "zone_append": false, 00:10:39.165 "compare": false, 00:10:39.165 "compare_and_write": false, 00:10:39.165 "abort": true, 00:10:39.165 "seek_hole": false, 00:10:39.165 "seek_data": false, 
00:10:39.165 "copy": true, 00:10:39.165 "nvme_iov_md": false 00:10:39.165 }, 00:10:39.165 "memory_domains": [ 00:10:39.165 { 00:10:39.165 "dma_device_id": "system", 00:10:39.165 "dma_device_type": 1 00:10:39.165 }, 00:10:39.165 { 00:10:39.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.165 "dma_device_type": 2 00:10:39.165 } 00:10:39.165 ], 00:10:39.165 "driver_specific": {} 00:10:39.165 } 00:10:39.165 ] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.165 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.424 BaseBdev4 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:39.424 
09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.424 [ 00:10:39.424 { 00:10:39.424 "name": "BaseBdev4", 00:10:39.424 "aliases": [ 00:10:39.424 "dc6e1634-1056-4e6a-9799-6e974b059fa3" 00:10:39.424 ], 00:10:39.424 "product_name": "Malloc disk", 00:10:39.424 "block_size": 512, 00:10:39.424 "num_blocks": 65536, 00:10:39.424 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:39.424 "assigned_rate_limits": { 00:10:39.424 "rw_ios_per_sec": 0, 00:10:39.424 "rw_mbytes_per_sec": 0, 00:10:39.424 "r_mbytes_per_sec": 0, 00:10:39.424 "w_mbytes_per_sec": 0 00:10:39.424 }, 00:10:39.424 "claimed": false, 00:10:39.424 "zoned": false, 00:10:39.424 "supported_io_types": { 00:10:39.424 "read": true, 00:10:39.424 "write": true, 00:10:39.424 "unmap": true, 00:10:39.424 "flush": true, 00:10:39.424 "reset": true, 00:10:39.424 "nvme_admin": false, 00:10:39.424 "nvme_io": false, 00:10:39.424 "nvme_io_md": false, 00:10:39.424 "write_zeroes": true, 00:10:39.424 "zcopy": true, 00:10:39.424 "get_zone_info": false, 00:10:39.424 "zone_management": false, 00:10:39.424 "zone_append": false, 00:10:39.424 "compare": false, 00:10:39.424 "compare_and_write": false, 00:10:39.424 "abort": true, 00:10:39.424 "seek_hole": false, 00:10:39.424 "seek_data": false, 00:10:39.424 
"copy": true, 00:10:39.424 "nvme_iov_md": false 00:10:39.424 }, 00:10:39.424 "memory_domains": [ 00:10:39.424 { 00:10:39.424 "dma_device_id": "system", 00:10:39.424 "dma_device_type": 1 00:10:39.424 }, 00:10:39.424 { 00:10:39.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.424 "dma_device_type": 2 00:10:39.424 } 00:10:39.424 ], 00:10:39.424 "driver_specific": {} 00:10:39.424 } 00:10:39.424 ] 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.424 [2024-11-15 09:29:27.702157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.424 [2024-11-15 09:29:27.702263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.424 [2024-11-15 09:29:27.702336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.424 [2024-11-15 09:29:27.704617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.424 [2024-11-15 09:29:27.704737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.424 09:29:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.424 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.424 "name": "Existed_Raid", 00:10:39.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.424 "strip_size_kb": 64, 00:10:39.424 "state": "configuring", 00:10:39.424 
"raid_level": "raid0", 00:10:39.424 "superblock": false, 00:10:39.424 "num_base_bdevs": 4, 00:10:39.424 "num_base_bdevs_discovered": 3, 00:10:39.424 "num_base_bdevs_operational": 4, 00:10:39.424 "base_bdevs_list": [ 00:10:39.424 { 00:10:39.424 "name": "BaseBdev1", 00:10:39.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.424 "is_configured": false, 00:10:39.424 "data_offset": 0, 00:10:39.424 "data_size": 0 00:10:39.424 }, 00:10:39.424 { 00:10:39.424 "name": "BaseBdev2", 00:10:39.424 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:39.424 "is_configured": true, 00:10:39.424 "data_offset": 0, 00:10:39.424 "data_size": 65536 00:10:39.424 }, 00:10:39.424 { 00:10:39.424 "name": "BaseBdev3", 00:10:39.424 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:39.424 "is_configured": true, 00:10:39.424 "data_offset": 0, 00:10:39.424 "data_size": 65536 00:10:39.424 }, 00:10:39.424 { 00:10:39.424 "name": "BaseBdev4", 00:10:39.424 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:39.424 "is_configured": true, 00:10:39.424 "data_offset": 0, 00:10:39.424 "data_size": 65536 00:10:39.424 } 00:10:39.424 ] 00:10:39.424 }' 00:10:39.425 09:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.425 09:29:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.683 [2024-11-15 09:29:28.141415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.683 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.941 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.941 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.941 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.942 "name": "Existed_Raid", 00:10:39.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.942 "strip_size_kb": 64, 00:10:39.942 "state": "configuring", 00:10:39.942 "raid_level": "raid0", 00:10:39.942 "superblock": false, 00:10:39.942 
"num_base_bdevs": 4, 00:10:39.942 "num_base_bdevs_discovered": 2, 00:10:39.942 "num_base_bdevs_operational": 4, 00:10:39.942 "base_bdevs_list": [ 00:10:39.942 { 00:10:39.942 "name": "BaseBdev1", 00:10:39.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.942 "is_configured": false, 00:10:39.942 "data_offset": 0, 00:10:39.942 "data_size": 0 00:10:39.942 }, 00:10:39.942 { 00:10:39.942 "name": null, 00:10:39.942 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:39.942 "is_configured": false, 00:10:39.942 "data_offset": 0, 00:10:39.942 "data_size": 65536 00:10:39.942 }, 00:10:39.942 { 00:10:39.942 "name": "BaseBdev3", 00:10:39.942 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:39.942 "is_configured": true, 00:10:39.942 "data_offset": 0, 00:10:39.942 "data_size": 65536 00:10:39.942 }, 00:10:39.942 { 00:10:39.942 "name": "BaseBdev4", 00:10:39.942 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:39.942 "is_configured": true, 00:10:39.942 "data_offset": 0, 00:10:39.942 "data_size": 65536 00:10:39.942 } 00:10:39.942 ] 00:10:39.942 }' 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.942 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:40.200 09:29:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.200 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.458 [2024-11-15 09:29:28.672360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.458 BaseBdev1 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.458 09:29:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.458 [ 00:10:40.458 { 00:10:40.458 "name": "BaseBdev1", 00:10:40.458 "aliases": [ 00:10:40.458 "d2b6afb3-8564-4831-bca3-2ce9e268ac70" 00:10:40.458 ], 00:10:40.458 "product_name": "Malloc disk", 00:10:40.458 "block_size": 512, 00:10:40.458 "num_blocks": 65536, 00:10:40.458 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:40.458 "assigned_rate_limits": { 00:10:40.458 "rw_ios_per_sec": 0, 00:10:40.458 "rw_mbytes_per_sec": 0, 00:10:40.458 "r_mbytes_per_sec": 0, 00:10:40.458 "w_mbytes_per_sec": 0 00:10:40.458 }, 00:10:40.458 "claimed": true, 00:10:40.458 "claim_type": "exclusive_write", 00:10:40.458 "zoned": false, 00:10:40.458 "supported_io_types": { 00:10:40.458 "read": true, 00:10:40.458 "write": true, 00:10:40.458 "unmap": true, 00:10:40.458 "flush": true, 00:10:40.458 "reset": true, 00:10:40.458 "nvme_admin": false, 00:10:40.458 "nvme_io": false, 00:10:40.458 "nvme_io_md": false, 00:10:40.458 "write_zeroes": true, 00:10:40.459 "zcopy": true, 00:10:40.459 "get_zone_info": false, 00:10:40.459 "zone_management": false, 00:10:40.459 "zone_append": false, 00:10:40.459 "compare": false, 00:10:40.459 "compare_and_write": false, 00:10:40.459 "abort": true, 00:10:40.459 "seek_hole": false, 00:10:40.459 "seek_data": false, 00:10:40.459 "copy": true, 00:10:40.459 "nvme_iov_md": false 00:10:40.459 }, 00:10:40.459 "memory_domains": [ 00:10:40.459 { 00:10:40.459 "dma_device_id": "system", 00:10:40.459 "dma_device_type": 1 00:10:40.459 }, 00:10:40.459 { 00:10:40.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.459 "dma_device_type": 2 00:10:40.459 } 00:10:40.459 ], 00:10:40.459 "driver_specific": {} 00:10:40.459 } 00:10:40.459 ] 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.459 "name": "Existed_Raid", 00:10:40.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.459 "strip_size_kb": 64, 00:10:40.459 "state": "configuring", 00:10:40.459 "raid_level": "raid0", 00:10:40.459 "superblock": false, 
00:10:40.459 "num_base_bdevs": 4, 00:10:40.459 "num_base_bdevs_discovered": 3, 00:10:40.459 "num_base_bdevs_operational": 4, 00:10:40.459 "base_bdevs_list": [ 00:10:40.459 { 00:10:40.459 "name": "BaseBdev1", 00:10:40.459 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:40.459 "is_configured": true, 00:10:40.459 "data_offset": 0, 00:10:40.459 "data_size": 65536 00:10:40.459 }, 00:10:40.459 { 00:10:40.459 "name": null, 00:10:40.459 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:40.459 "is_configured": false, 00:10:40.459 "data_offset": 0, 00:10:40.459 "data_size": 65536 00:10:40.459 }, 00:10:40.459 { 00:10:40.459 "name": "BaseBdev3", 00:10:40.459 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:40.459 "is_configured": true, 00:10:40.459 "data_offset": 0, 00:10:40.459 "data_size": 65536 00:10:40.459 }, 00:10:40.459 { 00:10:40.459 "name": "BaseBdev4", 00:10:40.459 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:40.459 "is_configured": true, 00:10:40.459 "data_offset": 0, 00:10:40.459 "data_size": 65536 00:10:40.459 } 00:10:40.459 ] 00:10:40.459 }' 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.459 09:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:41.028 09:29:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.028 [2024-11-15 09:29:29.219528] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.028 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.029 09:29:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.029 "name": "Existed_Raid", 00:10:41.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.029 "strip_size_kb": 64, 00:10:41.029 "state": "configuring", 00:10:41.029 "raid_level": "raid0", 00:10:41.029 "superblock": false, 00:10:41.029 "num_base_bdevs": 4, 00:10:41.029 "num_base_bdevs_discovered": 2, 00:10:41.029 "num_base_bdevs_operational": 4, 00:10:41.029 "base_bdevs_list": [ 00:10:41.029 { 00:10:41.029 "name": "BaseBdev1", 00:10:41.029 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:41.029 "is_configured": true, 00:10:41.029 "data_offset": 0, 00:10:41.029 "data_size": 65536 00:10:41.029 }, 00:10:41.029 { 00:10:41.029 "name": null, 00:10:41.029 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:41.029 "is_configured": false, 00:10:41.029 "data_offset": 0, 00:10:41.029 "data_size": 65536 00:10:41.029 }, 00:10:41.029 { 00:10:41.029 "name": null, 00:10:41.029 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:41.029 "is_configured": false, 00:10:41.029 "data_offset": 0, 00:10:41.029 "data_size": 65536 00:10:41.029 }, 00:10:41.029 { 00:10:41.029 "name": "BaseBdev4", 00:10:41.029 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:41.029 "is_configured": true, 00:10:41.029 "data_offset": 0, 00:10:41.029 "data_size": 65536 00:10:41.029 } 00:10:41.029 ] 00:10:41.029 }' 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.029 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.287 [2024-11-15 09:29:29.738700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.287 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.545 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.545 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.545 "name": "Existed_Raid", 00:10:41.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.545 "strip_size_kb": 64, 00:10:41.545 "state": "configuring", 00:10:41.545 "raid_level": "raid0", 00:10:41.545 "superblock": false, 00:10:41.545 "num_base_bdevs": 4, 00:10:41.545 "num_base_bdevs_discovered": 3, 00:10:41.545 "num_base_bdevs_operational": 4, 00:10:41.545 "base_bdevs_list": [ 00:10:41.545 { 00:10:41.545 "name": "BaseBdev1", 00:10:41.545 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:41.545 "is_configured": true, 00:10:41.545 "data_offset": 0, 00:10:41.545 "data_size": 65536 00:10:41.545 }, 00:10:41.545 { 00:10:41.545 "name": null, 00:10:41.545 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:41.545 "is_configured": false, 00:10:41.545 "data_offset": 0, 00:10:41.545 "data_size": 65536 00:10:41.545 }, 00:10:41.545 { 00:10:41.545 "name": "BaseBdev3", 00:10:41.545 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 
00:10:41.545 "is_configured": true, 00:10:41.545 "data_offset": 0, 00:10:41.545 "data_size": 65536 00:10:41.545 }, 00:10:41.545 { 00:10:41.545 "name": "BaseBdev4", 00:10:41.545 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:41.545 "is_configured": true, 00:10:41.545 "data_offset": 0, 00:10:41.545 "data_size": 65536 00:10:41.545 } 00:10:41.545 ] 00:10:41.545 }' 00:10:41.545 09:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.545 09:29:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.803 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.803 [2024-11-15 09:29:30.265955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.061 09:29:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.061 "name": "Existed_Raid", 00:10:42.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.061 "strip_size_kb": 64, 00:10:42.061 "state": "configuring", 00:10:42.061 "raid_level": "raid0", 00:10:42.061 "superblock": false, 00:10:42.061 "num_base_bdevs": 4, 00:10:42.061 "num_base_bdevs_discovered": 2, 00:10:42.061 
"num_base_bdevs_operational": 4, 00:10:42.061 "base_bdevs_list": [ 00:10:42.061 { 00:10:42.061 "name": null, 00:10:42.061 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:42.061 "is_configured": false, 00:10:42.061 "data_offset": 0, 00:10:42.061 "data_size": 65536 00:10:42.061 }, 00:10:42.061 { 00:10:42.061 "name": null, 00:10:42.061 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:42.061 "is_configured": false, 00:10:42.061 "data_offset": 0, 00:10:42.061 "data_size": 65536 00:10:42.061 }, 00:10:42.061 { 00:10:42.061 "name": "BaseBdev3", 00:10:42.061 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:42.061 "is_configured": true, 00:10:42.061 "data_offset": 0, 00:10:42.061 "data_size": 65536 00:10:42.061 }, 00:10:42.061 { 00:10:42.061 "name": "BaseBdev4", 00:10:42.061 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:42.061 "is_configured": true, 00:10:42.061 "data_offset": 0, 00:10:42.061 "data_size": 65536 00:10:42.061 } 00:10:42.061 ] 00:10:42.061 }' 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.061 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.628 [2024-11-15 09:29:30.875184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.628 
09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.628 "name": "Existed_Raid", 00:10:42.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.628 "strip_size_kb": 64, 00:10:42.628 "state": "configuring", 00:10:42.628 "raid_level": "raid0", 00:10:42.628 "superblock": false, 00:10:42.628 "num_base_bdevs": 4, 00:10:42.628 "num_base_bdevs_discovered": 3, 00:10:42.628 "num_base_bdevs_operational": 4, 00:10:42.628 "base_bdevs_list": [ 00:10:42.628 { 00:10:42.628 "name": null, 00:10:42.628 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:42.628 "is_configured": false, 00:10:42.628 "data_offset": 0, 00:10:42.628 "data_size": 65536 00:10:42.628 }, 00:10:42.628 { 00:10:42.628 "name": "BaseBdev2", 00:10:42.628 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:42.628 "is_configured": true, 00:10:42.628 "data_offset": 0, 00:10:42.628 "data_size": 65536 00:10:42.628 }, 00:10:42.628 { 00:10:42.628 "name": "BaseBdev3", 00:10:42.628 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:42.628 "is_configured": true, 00:10:42.628 "data_offset": 0, 00:10:42.628 "data_size": 65536 00:10:42.628 }, 00:10:42.628 { 00:10:42.628 "name": "BaseBdev4", 00:10:42.628 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:42.628 "is_configured": true, 00:10:42.628 "data_offset": 0, 00:10:42.628 "data_size": 65536 00:10:42.628 } 00:10:42.628 ] 00:10:42.628 }' 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.628 09:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.887 09:29:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d2b6afb3-8564-4831-bca3-2ce9e268ac70 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.887 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.147 [2024-11-15 09:29:31.357681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:43.147 [2024-11-15 09:29:31.357875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:43.147 [2024-11-15 09:29:31.357892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:43.147 [2024-11-15 09:29:31.358239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:43.147 [2024-11-15 09:29:31.358440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:43.147 [2024-11-15 09:29:31.358457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:43.147 [2024-11-15 09:29:31.358779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.147 NewBaseBdev 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:43.147 [ 00:10:43.147 { 00:10:43.147 "name": "NewBaseBdev", 00:10:43.147 "aliases": [ 00:10:43.147 "d2b6afb3-8564-4831-bca3-2ce9e268ac70" 00:10:43.147 ], 00:10:43.147 "product_name": "Malloc disk", 00:10:43.147 "block_size": 512, 00:10:43.147 "num_blocks": 65536, 00:10:43.147 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:43.147 "assigned_rate_limits": { 00:10:43.147 "rw_ios_per_sec": 0, 00:10:43.147 "rw_mbytes_per_sec": 0, 00:10:43.147 "r_mbytes_per_sec": 0, 00:10:43.147 "w_mbytes_per_sec": 0 00:10:43.147 }, 00:10:43.147 "claimed": true, 00:10:43.147 "claim_type": "exclusive_write", 00:10:43.147 "zoned": false, 00:10:43.147 "supported_io_types": { 00:10:43.147 "read": true, 00:10:43.147 "write": true, 00:10:43.147 "unmap": true, 00:10:43.147 "flush": true, 00:10:43.147 "reset": true, 00:10:43.147 "nvme_admin": false, 00:10:43.147 "nvme_io": false, 00:10:43.147 "nvme_io_md": false, 00:10:43.147 "write_zeroes": true, 00:10:43.147 "zcopy": true, 00:10:43.147 "get_zone_info": false, 00:10:43.147 "zone_management": false, 00:10:43.147 "zone_append": false, 00:10:43.147 "compare": false, 00:10:43.147 "compare_and_write": false, 00:10:43.147 "abort": true, 00:10:43.147 "seek_hole": false, 00:10:43.147 "seek_data": false, 00:10:43.147 "copy": true, 00:10:43.147 "nvme_iov_md": false 00:10:43.147 }, 00:10:43.147 "memory_domains": [ 00:10:43.147 { 00:10:43.147 "dma_device_id": "system", 00:10:43.147 "dma_device_type": 1 00:10:43.147 }, 00:10:43.147 { 00:10:43.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.147 "dma_device_type": 2 00:10:43.147 } 00:10:43.147 ], 00:10:43.147 "driver_specific": {} 00:10:43.147 } 00:10:43.147 ] 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.147 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.147 "name": "Existed_Raid", 00:10:43.147 "uuid": "7878adba-ba80-461e-8fd0-2b337df6cc2e", 00:10:43.147 "strip_size_kb": 64, 00:10:43.147 "state": "online", 00:10:43.147 "raid_level": "raid0", 00:10:43.147 "superblock": false, 00:10:43.147 "num_base_bdevs": 4, 00:10:43.147 
"num_base_bdevs_discovered": 4, 00:10:43.147 "num_base_bdevs_operational": 4, 00:10:43.147 "base_bdevs_list": [ 00:10:43.147 { 00:10:43.147 "name": "NewBaseBdev", 00:10:43.147 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:43.147 "is_configured": true, 00:10:43.147 "data_offset": 0, 00:10:43.147 "data_size": 65536 00:10:43.147 }, 00:10:43.147 { 00:10:43.147 "name": "BaseBdev2", 00:10:43.147 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:43.147 "is_configured": true, 00:10:43.147 "data_offset": 0, 00:10:43.147 "data_size": 65536 00:10:43.147 }, 00:10:43.148 { 00:10:43.148 "name": "BaseBdev3", 00:10:43.148 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:43.148 "is_configured": true, 00:10:43.148 "data_offset": 0, 00:10:43.148 "data_size": 65536 00:10:43.148 }, 00:10:43.148 { 00:10:43.148 "name": "BaseBdev4", 00:10:43.148 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:43.148 "is_configured": true, 00:10:43.148 "data_offset": 0, 00:10:43.148 "data_size": 65536 00:10:43.148 } 00:10:43.148 ] 00:10:43.148 }' 00:10:43.148 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.148 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.407 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.407 [2024-11-15 09:29:31.857460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.666 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.666 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.666 "name": "Existed_Raid", 00:10:43.666 "aliases": [ 00:10:43.666 "7878adba-ba80-461e-8fd0-2b337df6cc2e" 00:10:43.666 ], 00:10:43.666 "product_name": "Raid Volume", 00:10:43.666 "block_size": 512, 00:10:43.666 "num_blocks": 262144, 00:10:43.666 "uuid": "7878adba-ba80-461e-8fd0-2b337df6cc2e", 00:10:43.666 "assigned_rate_limits": { 00:10:43.666 "rw_ios_per_sec": 0, 00:10:43.666 "rw_mbytes_per_sec": 0, 00:10:43.666 "r_mbytes_per_sec": 0, 00:10:43.666 "w_mbytes_per_sec": 0 00:10:43.666 }, 00:10:43.666 "claimed": false, 00:10:43.666 "zoned": false, 00:10:43.666 "supported_io_types": { 00:10:43.666 "read": true, 00:10:43.666 "write": true, 00:10:43.666 "unmap": true, 00:10:43.666 "flush": true, 00:10:43.666 "reset": true, 00:10:43.666 "nvme_admin": false, 00:10:43.666 "nvme_io": false, 00:10:43.666 "nvme_io_md": false, 00:10:43.666 "write_zeroes": true, 00:10:43.666 "zcopy": false, 00:10:43.666 "get_zone_info": false, 00:10:43.666 "zone_management": false, 00:10:43.666 "zone_append": false, 00:10:43.666 "compare": false, 00:10:43.666 "compare_and_write": false, 00:10:43.666 "abort": false, 00:10:43.666 "seek_hole": false, 00:10:43.666 "seek_data": false, 00:10:43.666 "copy": false, 00:10:43.666 "nvme_iov_md": false 00:10:43.666 }, 00:10:43.666 "memory_domains": [ 
00:10:43.666 { 00:10:43.666 "dma_device_id": "system", 00:10:43.666 "dma_device_type": 1 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.666 "dma_device_type": 2 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "dma_device_id": "system", 00:10:43.666 "dma_device_type": 1 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.666 "dma_device_type": 2 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "dma_device_id": "system", 00:10:43.666 "dma_device_type": 1 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.666 "dma_device_type": 2 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "dma_device_id": "system", 00:10:43.666 "dma_device_type": 1 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.666 "dma_device_type": 2 00:10:43.666 } 00:10:43.666 ], 00:10:43.666 "driver_specific": { 00:10:43.666 "raid": { 00:10:43.666 "uuid": "7878adba-ba80-461e-8fd0-2b337df6cc2e", 00:10:43.666 "strip_size_kb": 64, 00:10:43.666 "state": "online", 00:10:43.666 "raid_level": "raid0", 00:10:43.666 "superblock": false, 00:10:43.666 "num_base_bdevs": 4, 00:10:43.666 "num_base_bdevs_discovered": 4, 00:10:43.666 "num_base_bdevs_operational": 4, 00:10:43.666 "base_bdevs_list": [ 00:10:43.666 { 00:10:43.666 "name": "NewBaseBdev", 00:10:43.666 "uuid": "d2b6afb3-8564-4831-bca3-2ce9e268ac70", 00:10:43.666 "is_configured": true, 00:10:43.666 "data_offset": 0, 00:10:43.666 "data_size": 65536 00:10:43.666 }, 00:10:43.666 { 00:10:43.666 "name": "BaseBdev2", 00:10:43.666 "uuid": "3d0af6f6-f33f-448f-8a2e-3f2ca0170083", 00:10:43.666 "is_configured": true, 00:10:43.666 "data_offset": 0, 00:10:43.667 "data_size": 65536 00:10:43.667 }, 00:10:43.667 { 00:10:43.667 "name": "BaseBdev3", 00:10:43.667 "uuid": "588094ca-94f4-4ddb-9e99-78f2ec69ffd9", 00:10:43.667 "is_configured": true, 00:10:43.667 "data_offset": 0, 00:10:43.667 "data_size": 65536 
00:10:43.667 }, 00:10:43.667 { 00:10:43.667 "name": "BaseBdev4", 00:10:43.667 "uuid": "dc6e1634-1056-4e6a-9799-6e974b059fa3", 00:10:43.667 "is_configured": true, 00:10:43.667 "data_offset": 0, 00:10:43.667 "data_size": 65536 00:10:43.667 } 00:10:43.667 ] 00:10:43.667 } 00:10:43.667 } 00:10:43.667 }' 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:43.667 BaseBdev2 00:10:43.667 BaseBdev3 00:10:43.667 BaseBdev4' 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.667 09:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.667 
09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.667 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 [2024-11-15 09:29:32.177033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.926 [2024-11-15 09:29:32.177180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.926 [2024-11-15 09:29:32.177312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.926 [2024-11-15 09:29:32.177434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.926 [2024-11-15 09:29:32.177489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69727 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@952 -- # '[' -z 69727 ']' 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69727 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69727 00:10:43.926 killing process with pid 69727 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69727' 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69727 00:10:43.926 [2024-11-15 09:29:32.226277] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.926 09:29:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69727 00:10:44.494 [2024-11-15 09:29:32.703239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.873 09:29:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:45.873 00:10:45.873 real 0m11.969s 00:10:45.873 user 0m18.703s 00:10:45.873 sys 0m2.043s 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.873 ************************************ 00:10:45.873 END TEST raid_state_function_test 00:10:45.873 ************************************ 00:10:45.873 09:29:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:45.873 09:29:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:45.873 09:29:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.873 09:29:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.873 ************************************ 00:10:45.873 START TEST raid_state_function_test_sb 00:10:45.873 ************************************ 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:45.873 
09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70400 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70400' 00:10:45.873 Process raid pid: 70400 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70400 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70400 ']' 00:10:45.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:45.873 09:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.873 [2024-11-15 09:29:34.189836] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:10:45.873 [2024-11-15 09:29:34.190000] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.132 [2024-11-15 09:29:34.357583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.132 [2024-11-15 09:29:34.492841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.401 [2024-11-15 09:29:34.730607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.402 [2024-11-15 09:29:34.730659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 [2024-11-15 09:29:35.089373] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.662 [2024-11-15 09:29:35.089539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.662 [2024-11-15 09:29:35.089577] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.662 [2024-11-15 09:29:35.089613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.662 [2024-11-15 09:29:35.089641] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:46.662 [2024-11-15 09:29:35.089665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.662 [2024-11-15 09:29:35.089694] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:46.662 [2024-11-15 09:29:35.089724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.662 09:29:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.662 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.922 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.922 "name": "Existed_Raid", 00:10:46.922 "uuid": "93799b7f-ddd4-4ab8-b9e2-ec022a31f32d", 00:10:46.922 "strip_size_kb": 64, 00:10:46.922 "state": "configuring", 00:10:46.922 "raid_level": "raid0", 00:10:46.922 "superblock": true, 00:10:46.922 "num_base_bdevs": 4, 00:10:46.922 "num_base_bdevs_discovered": 0, 00:10:46.922 "num_base_bdevs_operational": 4, 00:10:46.922 "base_bdevs_list": [ 00:10:46.922 { 00:10:46.922 "name": "BaseBdev1", 00:10:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.922 "is_configured": false, 00:10:46.922 "data_offset": 0, 00:10:46.922 "data_size": 0 00:10:46.922 }, 00:10:46.922 { 00:10:46.922 "name": "BaseBdev2", 00:10:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.922 "is_configured": false, 00:10:46.922 "data_offset": 0, 00:10:46.922 "data_size": 0 00:10:46.922 }, 00:10:46.922 { 00:10:46.922 "name": "BaseBdev3", 00:10:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.922 "is_configured": false, 00:10:46.922 "data_offset": 0, 00:10:46.922 "data_size": 0 00:10:46.922 }, 00:10:46.922 { 00:10:46.922 "name": "BaseBdev4", 00:10:46.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.922 "is_configured": false, 00:10:46.922 "data_offset": 0, 00:10:46.922 "data_size": 0 00:10:46.922 } 00:10:46.922 ] 00:10:46.922 }' 00:10:46.922 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.922 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 09:29:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 [2024-11-15 09:29:35.548523] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.182 [2024-11-15 09:29:35.548669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 [2024-11-15 09:29:35.560476] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.182 [2024-11-15 09:29:35.560592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.182 [2024-11-15 09:29:35.560624] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.182 [2024-11-15 09:29:35.560646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.182 [2024-11-15 09:29:35.560673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.182 [2024-11-15 09:29:35.560696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.182 [2024-11-15 09:29:35.560770] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:47.182 [2024-11-15 09:29:35.560792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 [2024-11-15 09:29:35.609655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.182 BaseBdev1 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.182 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 [ 00:10:47.182 { 00:10:47.182 "name": "BaseBdev1", 00:10:47.182 "aliases": [ 00:10:47.182 "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5" 00:10:47.182 ], 00:10:47.182 "product_name": "Malloc disk", 00:10:47.182 "block_size": 512, 00:10:47.182 "num_blocks": 65536, 00:10:47.182 "uuid": "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5", 00:10:47.182 "assigned_rate_limits": { 00:10:47.182 "rw_ios_per_sec": 0, 00:10:47.182 "rw_mbytes_per_sec": 0, 00:10:47.182 "r_mbytes_per_sec": 0, 00:10:47.182 "w_mbytes_per_sec": 0 00:10:47.182 }, 00:10:47.182 "claimed": true, 00:10:47.182 "claim_type": "exclusive_write", 00:10:47.182 "zoned": false, 00:10:47.182 "supported_io_types": { 00:10:47.182 "read": true, 00:10:47.182 "write": true, 00:10:47.182 "unmap": true, 00:10:47.182 "flush": true, 00:10:47.182 "reset": true, 00:10:47.182 "nvme_admin": false, 00:10:47.182 "nvme_io": false, 00:10:47.182 "nvme_io_md": false, 00:10:47.182 "write_zeroes": true, 00:10:47.182 "zcopy": true, 00:10:47.182 "get_zone_info": false, 00:10:47.182 "zone_management": false, 00:10:47.182 "zone_append": false, 00:10:47.182 "compare": false, 00:10:47.182 "compare_and_write": false, 00:10:47.182 "abort": true, 00:10:47.182 "seek_hole": false, 00:10:47.441 "seek_data": false, 00:10:47.441 "copy": true, 00:10:47.441 "nvme_iov_md": false 00:10:47.441 }, 00:10:47.441 "memory_domains": [ 00:10:47.441 { 00:10:47.441 "dma_device_id": "system", 00:10:47.441 "dma_device_type": 1 00:10:47.441 }, 00:10:47.441 { 00:10:47.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.441 "dma_device_type": 2 00:10:47.441 } 
00:10:47.441 ], 00:10:47.441 "driver_specific": {} 00:10:47.441 } 00:10:47.441 ] 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 09:29:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.441 "name": "Existed_Raid", 00:10:47.441 "uuid": "1db73a2d-d30d-4e96-a3fd-b1e8671372e3", 00:10:47.441 "strip_size_kb": 64, 00:10:47.441 "state": "configuring", 00:10:47.441 "raid_level": "raid0", 00:10:47.441 "superblock": true, 00:10:47.441 "num_base_bdevs": 4, 00:10:47.441 "num_base_bdevs_discovered": 1, 00:10:47.441 "num_base_bdevs_operational": 4, 00:10:47.441 "base_bdevs_list": [ 00:10:47.441 { 00:10:47.441 "name": "BaseBdev1", 00:10:47.441 "uuid": "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5", 00:10:47.441 "is_configured": true, 00:10:47.441 "data_offset": 2048, 00:10:47.441 "data_size": 63488 00:10:47.441 }, 00:10:47.441 { 00:10:47.441 "name": "BaseBdev2", 00:10:47.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.441 "is_configured": false, 00:10:47.441 "data_offset": 0, 00:10:47.441 "data_size": 0 00:10:47.441 }, 00:10:47.441 { 00:10:47.441 "name": "BaseBdev3", 00:10:47.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.441 "is_configured": false, 00:10:47.441 "data_offset": 0, 00:10:47.441 "data_size": 0 00:10:47.441 }, 00:10:47.441 { 00:10:47.441 "name": "BaseBdev4", 00:10:47.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.441 "is_configured": false, 00:10:47.441 "data_offset": 0, 00:10:47.441 "data_size": 0 00:10:47.441 } 00:10:47.441 ] 00:10:47.441 }' 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.441 09:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.701 09:29:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.701 [2024-11-15 09:29:36.128897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.701 [2024-11-15 09:29:36.129069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.701 [2024-11-15 09:29:36.140949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.701 [2024-11-15 09:29:36.143175] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.701 [2024-11-15 09:29:36.143270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.701 [2024-11-15 09:29:36.143311] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.701 [2024-11-15 09:29:36.143340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.701 [2024-11-15 09:29:36.143386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:47.701 [2024-11-15 09:29:36.143420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.701 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.702 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.702 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.005 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.005 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:48.005 "name": "Existed_Raid", 00:10:48.005 "uuid": "64acfe48-444c-4c69-af5a-5d2cef28f909", 00:10:48.005 "strip_size_kb": 64, 00:10:48.005 "state": "configuring", 00:10:48.005 "raid_level": "raid0", 00:10:48.005 "superblock": true, 00:10:48.005 "num_base_bdevs": 4, 00:10:48.005 "num_base_bdevs_discovered": 1, 00:10:48.005 "num_base_bdevs_operational": 4, 00:10:48.005 "base_bdevs_list": [ 00:10:48.005 { 00:10:48.005 "name": "BaseBdev1", 00:10:48.005 "uuid": "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5", 00:10:48.005 "is_configured": true, 00:10:48.005 "data_offset": 2048, 00:10:48.005 "data_size": 63488 00:10:48.005 }, 00:10:48.005 { 00:10:48.005 "name": "BaseBdev2", 00:10:48.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.005 "is_configured": false, 00:10:48.005 "data_offset": 0, 00:10:48.005 "data_size": 0 00:10:48.005 }, 00:10:48.005 { 00:10:48.005 "name": "BaseBdev3", 00:10:48.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.005 "is_configured": false, 00:10:48.005 "data_offset": 0, 00:10:48.005 "data_size": 0 00:10:48.005 }, 00:10:48.005 { 00:10:48.005 "name": "BaseBdev4", 00:10:48.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.005 "is_configured": false, 00:10:48.005 "data_offset": 0, 00:10:48.005 "data_size": 0 00:10:48.005 } 00:10:48.005 ] 00:10:48.005 }' 00:10:48.005 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.005 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 [2024-11-15 09:29:36.652720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:48.284 BaseBdev2 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 [ 00:10:48.284 { 00:10:48.284 "name": "BaseBdev2", 00:10:48.284 "aliases": [ 00:10:48.284 "8c34ae9e-a8c4-4ae7-a9db-1ec157a13e7c" 00:10:48.284 ], 00:10:48.284 "product_name": "Malloc disk", 00:10:48.284 "block_size": 512, 00:10:48.284 "num_blocks": 65536, 00:10:48.284 "uuid": "8c34ae9e-a8c4-4ae7-a9db-1ec157a13e7c", 
00:10:48.284 "assigned_rate_limits": { 00:10:48.284 "rw_ios_per_sec": 0, 00:10:48.284 "rw_mbytes_per_sec": 0, 00:10:48.284 "r_mbytes_per_sec": 0, 00:10:48.284 "w_mbytes_per_sec": 0 00:10:48.284 }, 00:10:48.284 "claimed": true, 00:10:48.284 "claim_type": "exclusive_write", 00:10:48.284 "zoned": false, 00:10:48.284 "supported_io_types": { 00:10:48.284 "read": true, 00:10:48.284 "write": true, 00:10:48.284 "unmap": true, 00:10:48.284 "flush": true, 00:10:48.284 "reset": true, 00:10:48.284 "nvme_admin": false, 00:10:48.284 "nvme_io": false, 00:10:48.284 "nvme_io_md": false, 00:10:48.284 "write_zeroes": true, 00:10:48.284 "zcopy": true, 00:10:48.284 "get_zone_info": false, 00:10:48.284 "zone_management": false, 00:10:48.284 "zone_append": false, 00:10:48.284 "compare": false, 00:10:48.284 "compare_and_write": false, 00:10:48.284 "abort": true, 00:10:48.284 "seek_hole": false, 00:10:48.284 "seek_data": false, 00:10:48.284 "copy": true, 00:10:48.284 "nvme_iov_md": false 00:10:48.284 }, 00:10:48.284 "memory_domains": [ 00:10:48.284 { 00:10:48.284 "dma_device_id": "system", 00:10:48.284 "dma_device_type": 1 00:10:48.284 }, 00:10:48.284 { 00:10:48.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.284 "dma_device_type": 2 00:10:48.284 } 00:10:48.284 ], 00:10:48.284 "driver_specific": {} 00:10:48.284 } 00:10:48.284 ] 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.543 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.543 "name": "Existed_Raid", 00:10:48.543 "uuid": "64acfe48-444c-4c69-af5a-5d2cef28f909", 00:10:48.543 "strip_size_kb": 64, 00:10:48.543 "state": "configuring", 00:10:48.543 "raid_level": "raid0", 00:10:48.543 "superblock": true, 00:10:48.543 "num_base_bdevs": 4, 00:10:48.543 "num_base_bdevs_discovered": 2, 00:10:48.543 
"num_base_bdevs_operational": 4, 00:10:48.543 "base_bdevs_list": [ 00:10:48.543 { 00:10:48.543 "name": "BaseBdev1", 00:10:48.543 "uuid": "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5", 00:10:48.543 "is_configured": true, 00:10:48.543 "data_offset": 2048, 00:10:48.543 "data_size": 63488 00:10:48.543 }, 00:10:48.543 { 00:10:48.543 "name": "BaseBdev2", 00:10:48.543 "uuid": "8c34ae9e-a8c4-4ae7-a9db-1ec157a13e7c", 00:10:48.543 "is_configured": true, 00:10:48.543 "data_offset": 2048, 00:10:48.543 "data_size": 63488 00:10:48.543 }, 00:10:48.543 { 00:10:48.543 "name": "BaseBdev3", 00:10:48.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.543 "is_configured": false, 00:10:48.543 "data_offset": 0, 00:10:48.543 "data_size": 0 00:10:48.543 }, 00:10:48.543 { 00:10:48.543 "name": "BaseBdev4", 00:10:48.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.543 "is_configured": false, 00:10:48.543 "data_offset": 0, 00:10:48.543 "data_size": 0 00:10:48.543 } 00:10:48.543 ] 00:10:48.543 }' 00:10:48.543 09:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.543 09:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.803 [2024-11-15 09:29:37.189734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.803 BaseBdev3 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.803 [ 00:10:48.803 { 00:10:48.803 "name": "BaseBdev3", 00:10:48.803 "aliases": [ 00:10:48.803 "b553a2b6-8166-40df-8efc-4874b84046ea" 00:10:48.803 ], 00:10:48.803 "product_name": "Malloc disk", 00:10:48.803 "block_size": 512, 00:10:48.803 "num_blocks": 65536, 00:10:48.803 "uuid": "b553a2b6-8166-40df-8efc-4874b84046ea", 00:10:48.803 "assigned_rate_limits": { 00:10:48.803 "rw_ios_per_sec": 0, 00:10:48.803 "rw_mbytes_per_sec": 0, 00:10:48.803 "r_mbytes_per_sec": 0, 00:10:48.803 "w_mbytes_per_sec": 0 00:10:48.803 }, 00:10:48.803 "claimed": true, 00:10:48.803 "claim_type": "exclusive_write", 00:10:48.803 "zoned": false, 00:10:48.803 "supported_io_types": { 
00:10:48.803 "read": true, 00:10:48.803 "write": true, 00:10:48.803 "unmap": true, 00:10:48.803 "flush": true, 00:10:48.803 "reset": true, 00:10:48.803 "nvme_admin": false, 00:10:48.803 "nvme_io": false, 00:10:48.803 "nvme_io_md": false, 00:10:48.803 "write_zeroes": true, 00:10:48.803 "zcopy": true, 00:10:48.803 "get_zone_info": false, 00:10:48.803 "zone_management": false, 00:10:48.803 "zone_append": false, 00:10:48.803 "compare": false, 00:10:48.803 "compare_and_write": false, 00:10:48.803 "abort": true, 00:10:48.803 "seek_hole": false, 00:10:48.803 "seek_data": false, 00:10:48.803 "copy": true, 00:10:48.803 "nvme_iov_md": false 00:10:48.803 }, 00:10:48.803 "memory_domains": [ 00:10:48.803 { 00:10:48.803 "dma_device_id": "system", 00:10:48.803 "dma_device_type": 1 00:10:48.803 }, 00:10:48.803 { 00:10:48.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.803 "dma_device_type": 2 00:10:48.803 } 00:10:48.803 ], 00:10:48.803 "driver_specific": {} 00:10:48.803 } 00:10:48.803 ] 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.803 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.063 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.063 "name": "Existed_Raid", 00:10:49.063 "uuid": "64acfe48-444c-4c69-af5a-5d2cef28f909", 00:10:49.063 "strip_size_kb": 64, 00:10:49.063 "state": "configuring", 00:10:49.063 "raid_level": "raid0", 00:10:49.063 "superblock": true, 00:10:49.063 "num_base_bdevs": 4, 00:10:49.063 "num_base_bdevs_discovered": 3, 00:10:49.063 "num_base_bdevs_operational": 4, 00:10:49.063 "base_bdevs_list": [ 00:10:49.063 { 00:10:49.063 "name": "BaseBdev1", 00:10:49.063 "uuid": "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5", 00:10:49.063 "is_configured": true, 00:10:49.063 "data_offset": 2048, 00:10:49.063 "data_size": 63488 00:10:49.063 }, 00:10:49.063 { 00:10:49.063 "name": "BaseBdev2", 00:10:49.063 
"uuid": "8c34ae9e-a8c4-4ae7-a9db-1ec157a13e7c", 00:10:49.063 "is_configured": true, 00:10:49.063 "data_offset": 2048, 00:10:49.063 "data_size": 63488 00:10:49.063 }, 00:10:49.063 { 00:10:49.063 "name": "BaseBdev3", 00:10:49.063 "uuid": "b553a2b6-8166-40df-8efc-4874b84046ea", 00:10:49.063 "is_configured": true, 00:10:49.063 "data_offset": 2048, 00:10:49.063 "data_size": 63488 00:10:49.063 }, 00:10:49.063 { 00:10:49.063 "name": "BaseBdev4", 00:10:49.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.063 "is_configured": false, 00:10:49.063 "data_offset": 0, 00:10:49.063 "data_size": 0 00:10:49.063 } 00:10:49.063 ] 00:10:49.063 }' 00:10:49.063 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.063 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.322 [2024-11-15 09:29:37.750114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.322 [2024-11-15 09:29:37.750510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:49.322 [2024-11-15 09:29:37.750569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:49.322 [2024-11-15 09:29:37.750906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:49.322 BaseBdev4 00:10:49.322 [2024-11-15 09:29:37.751122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:49.322 [2024-11-15 09:29:37.751161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:49.322 [2024-11-15 09:29:37.751322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:49.322 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:49.323 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.323 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.323 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.323 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.323 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.323 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.323 [ 00:10:49.323 { 00:10:49.323 "name": "BaseBdev4", 00:10:49.323 "aliases": [ 00:10:49.323 "a1d0856a-ee9e-4677-a295-8924a2ef85c1" 00:10:49.323 ], 00:10:49.323 "product_name": "Malloc disk", 00:10:49.323 "block_size": 512, 00:10:49.323 
"num_blocks": 65536, 00:10:49.323 "uuid": "a1d0856a-ee9e-4677-a295-8924a2ef85c1", 00:10:49.323 "assigned_rate_limits": { 00:10:49.323 "rw_ios_per_sec": 0, 00:10:49.323 "rw_mbytes_per_sec": 0, 00:10:49.323 "r_mbytes_per_sec": 0, 00:10:49.323 "w_mbytes_per_sec": 0 00:10:49.323 }, 00:10:49.323 "claimed": true, 00:10:49.323 "claim_type": "exclusive_write", 00:10:49.323 "zoned": false, 00:10:49.323 "supported_io_types": { 00:10:49.323 "read": true, 00:10:49.323 "write": true, 00:10:49.323 "unmap": true, 00:10:49.323 "flush": true, 00:10:49.323 "reset": true, 00:10:49.323 "nvme_admin": false, 00:10:49.323 "nvme_io": false, 00:10:49.323 "nvme_io_md": false, 00:10:49.323 "write_zeroes": true, 00:10:49.323 "zcopy": true, 00:10:49.323 "get_zone_info": false, 00:10:49.323 "zone_management": false, 00:10:49.323 "zone_append": false, 00:10:49.323 "compare": false, 00:10:49.323 "compare_and_write": false, 00:10:49.323 "abort": true, 00:10:49.323 "seek_hole": false, 00:10:49.323 "seek_data": false, 00:10:49.323 "copy": true, 00:10:49.323 "nvme_iov_md": false 00:10:49.323 }, 00:10:49.323 "memory_domains": [ 00:10:49.323 { 00:10:49.323 "dma_device_id": "system", 00:10:49.323 "dma_device_type": 1 00:10:49.323 }, 00:10:49.323 { 00:10:49.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.323 "dma_device_type": 2 00:10:49.323 } 00:10:49.323 ], 00:10:49.583 "driver_specific": {} 00:10:49.583 } 00:10:49.583 ] 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.583 "name": "Existed_Raid", 00:10:49.583 "uuid": "64acfe48-444c-4c69-af5a-5d2cef28f909", 00:10:49.583 "strip_size_kb": 64, 00:10:49.583 "state": "online", 00:10:49.583 "raid_level": "raid0", 00:10:49.583 "superblock": true, 00:10:49.583 "num_base_bdevs": 4, 
00:10:49.583 "num_base_bdevs_discovered": 4, 00:10:49.583 "num_base_bdevs_operational": 4, 00:10:49.583 "base_bdevs_list": [ 00:10:49.583 { 00:10:49.583 "name": "BaseBdev1", 00:10:49.583 "uuid": "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5", 00:10:49.583 "is_configured": true, 00:10:49.583 "data_offset": 2048, 00:10:49.583 "data_size": 63488 00:10:49.583 }, 00:10:49.583 { 00:10:49.583 "name": "BaseBdev2", 00:10:49.583 "uuid": "8c34ae9e-a8c4-4ae7-a9db-1ec157a13e7c", 00:10:49.583 "is_configured": true, 00:10:49.583 "data_offset": 2048, 00:10:49.583 "data_size": 63488 00:10:49.583 }, 00:10:49.583 { 00:10:49.583 "name": "BaseBdev3", 00:10:49.583 "uuid": "b553a2b6-8166-40df-8efc-4874b84046ea", 00:10:49.583 "is_configured": true, 00:10:49.583 "data_offset": 2048, 00:10:49.583 "data_size": 63488 00:10:49.583 }, 00:10:49.583 { 00:10:49.583 "name": "BaseBdev4", 00:10:49.583 "uuid": "a1d0856a-ee9e-4677-a295-8924a2ef85c1", 00:10:49.583 "is_configured": true, 00:10:49.583 "data_offset": 2048, 00:10:49.583 "data_size": 63488 00:10:49.583 } 00:10:49.583 ] 00:10:49.583 }' 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.583 09:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.842 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.843 
09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.843 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.843 [2024-11-15 09:29:38.305727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.102 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.102 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.102 "name": "Existed_Raid", 00:10:50.102 "aliases": [ 00:10:50.102 "64acfe48-444c-4c69-af5a-5d2cef28f909" 00:10:50.102 ], 00:10:50.102 "product_name": "Raid Volume", 00:10:50.102 "block_size": 512, 00:10:50.102 "num_blocks": 253952, 00:10:50.102 "uuid": "64acfe48-444c-4c69-af5a-5d2cef28f909", 00:10:50.102 "assigned_rate_limits": { 00:10:50.102 "rw_ios_per_sec": 0, 00:10:50.102 "rw_mbytes_per_sec": 0, 00:10:50.102 "r_mbytes_per_sec": 0, 00:10:50.102 "w_mbytes_per_sec": 0 00:10:50.102 }, 00:10:50.102 "claimed": false, 00:10:50.102 "zoned": false, 00:10:50.102 "supported_io_types": { 00:10:50.102 "read": true, 00:10:50.102 "write": true, 00:10:50.102 "unmap": true, 00:10:50.102 "flush": true, 00:10:50.102 "reset": true, 00:10:50.102 "nvme_admin": false, 00:10:50.102 "nvme_io": false, 00:10:50.102 "nvme_io_md": false, 00:10:50.102 "write_zeroes": true, 00:10:50.102 "zcopy": false, 00:10:50.102 "get_zone_info": false, 00:10:50.102 "zone_management": false, 00:10:50.102 "zone_append": false, 00:10:50.102 "compare": false, 00:10:50.102 "compare_and_write": false, 00:10:50.102 "abort": false, 00:10:50.102 "seek_hole": false, 00:10:50.102 "seek_data": false, 00:10:50.102 "copy": false, 00:10:50.102 
"nvme_iov_md": false 00:10:50.102 }, 00:10:50.102 "memory_domains": [ 00:10:50.102 { 00:10:50.102 "dma_device_id": "system", 00:10:50.102 "dma_device_type": 1 00:10:50.102 }, 00:10:50.102 { 00:10:50.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.102 "dma_device_type": 2 00:10:50.102 }, 00:10:50.102 { 00:10:50.102 "dma_device_id": "system", 00:10:50.102 "dma_device_type": 1 00:10:50.102 }, 00:10:50.102 { 00:10:50.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.102 "dma_device_type": 2 00:10:50.102 }, 00:10:50.102 { 00:10:50.102 "dma_device_id": "system", 00:10:50.103 "dma_device_type": 1 00:10:50.103 }, 00:10:50.103 { 00:10:50.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.103 "dma_device_type": 2 00:10:50.103 }, 00:10:50.103 { 00:10:50.103 "dma_device_id": "system", 00:10:50.103 "dma_device_type": 1 00:10:50.103 }, 00:10:50.103 { 00:10:50.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.103 "dma_device_type": 2 00:10:50.103 } 00:10:50.103 ], 00:10:50.103 "driver_specific": { 00:10:50.103 "raid": { 00:10:50.103 "uuid": "64acfe48-444c-4c69-af5a-5d2cef28f909", 00:10:50.103 "strip_size_kb": 64, 00:10:50.103 "state": "online", 00:10:50.103 "raid_level": "raid0", 00:10:50.103 "superblock": true, 00:10:50.103 "num_base_bdevs": 4, 00:10:50.103 "num_base_bdevs_discovered": 4, 00:10:50.103 "num_base_bdevs_operational": 4, 00:10:50.103 "base_bdevs_list": [ 00:10:50.103 { 00:10:50.103 "name": "BaseBdev1", 00:10:50.103 "uuid": "ecc1b7c5-ed64-4528-8ca2-24b9ef73a6a5", 00:10:50.103 "is_configured": true, 00:10:50.103 "data_offset": 2048, 00:10:50.103 "data_size": 63488 00:10:50.103 }, 00:10:50.103 { 00:10:50.103 "name": "BaseBdev2", 00:10:50.103 "uuid": "8c34ae9e-a8c4-4ae7-a9db-1ec157a13e7c", 00:10:50.103 "is_configured": true, 00:10:50.103 "data_offset": 2048, 00:10:50.103 "data_size": 63488 00:10:50.103 }, 00:10:50.103 { 00:10:50.103 "name": "BaseBdev3", 00:10:50.103 "uuid": "b553a2b6-8166-40df-8efc-4874b84046ea", 00:10:50.103 "is_configured": true, 
00:10:50.103 "data_offset": 2048, 00:10:50.103 "data_size": 63488 00:10:50.103 }, 00:10:50.103 { 00:10:50.103 "name": "BaseBdev4", 00:10:50.103 "uuid": "a1d0856a-ee9e-4677-a295-8924a2ef85c1", 00:10:50.103 "is_configured": true, 00:10:50.103 "data_offset": 2048, 00:10:50.103 "data_size": 63488 00:10:50.103 } 00:10:50.103 ] 00:10:50.103 } 00:10:50.103 } 00:10:50.103 }' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:50.103 BaseBdev2 00:10:50.103 BaseBdev3 00:10:50.103 BaseBdev4' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.103 09:29:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.103 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.364 [2024-11-15 09:29:38.616930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:50.364 [2024-11-15 09:29:38.616983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.364 [2024-11-15 09:29:38.617044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.364 "name": "Existed_Raid", 00:10:50.364 "uuid": "64acfe48-444c-4c69-af5a-5d2cef28f909", 00:10:50.364 "strip_size_kb": 64, 00:10:50.364 "state": "offline", 00:10:50.364 "raid_level": "raid0", 00:10:50.364 "superblock": true, 00:10:50.364 "num_base_bdevs": 4, 00:10:50.364 "num_base_bdevs_discovered": 3, 00:10:50.364 "num_base_bdevs_operational": 3, 00:10:50.364 "base_bdevs_list": [ 00:10:50.364 { 00:10:50.364 "name": null, 00:10:50.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.364 "is_configured": false, 00:10:50.364 "data_offset": 0, 00:10:50.364 "data_size": 63488 00:10:50.364 }, 00:10:50.364 { 00:10:50.364 "name": "BaseBdev2", 00:10:50.364 "uuid": "8c34ae9e-a8c4-4ae7-a9db-1ec157a13e7c", 00:10:50.364 "is_configured": true, 00:10:50.364 "data_offset": 2048, 00:10:50.364 "data_size": 63488 00:10:50.364 }, 00:10:50.364 { 00:10:50.364 "name": "BaseBdev3", 00:10:50.364 "uuid": "b553a2b6-8166-40df-8efc-4874b84046ea", 00:10:50.364 "is_configured": true, 00:10:50.364 "data_offset": 2048, 00:10:50.364 "data_size": 63488 00:10:50.364 }, 00:10:50.364 { 00:10:50.364 "name": "BaseBdev4", 00:10:50.364 "uuid": "a1d0856a-ee9e-4677-a295-8924a2ef85c1", 00:10:50.364 "is_configured": true, 00:10:50.364 "data_offset": 2048, 00:10:50.364 "data_size": 63488 00:10:50.364 } 00:10:50.364 ] 00:10:50.364 }' 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.364 09:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.935 09:29:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.935 [2024-11-15 09:29:39.272551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.935 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.195 [2024-11-15 09:29:39.441386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:51.195 09:29:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.195 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.195 [2024-11-15 09:29:39.609839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:51.195 [2024-11-15 09:29:39.610029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.456 BaseBdev2 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.456 [ 00:10:51.456 { 00:10:51.456 "name": "BaseBdev2", 00:10:51.456 "aliases": [ 00:10:51.456 
"dd9c66c0-ddcf-4279-b3e2-09f5d55819e9" 00:10:51.456 ], 00:10:51.456 "product_name": "Malloc disk", 00:10:51.456 "block_size": 512, 00:10:51.456 "num_blocks": 65536, 00:10:51.456 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:51.456 "assigned_rate_limits": { 00:10:51.456 "rw_ios_per_sec": 0, 00:10:51.456 "rw_mbytes_per_sec": 0, 00:10:51.456 "r_mbytes_per_sec": 0, 00:10:51.456 "w_mbytes_per_sec": 0 00:10:51.456 }, 00:10:51.456 "claimed": false, 00:10:51.456 "zoned": false, 00:10:51.456 "supported_io_types": { 00:10:51.456 "read": true, 00:10:51.456 "write": true, 00:10:51.456 "unmap": true, 00:10:51.456 "flush": true, 00:10:51.456 "reset": true, 00:10:51.456 "nvme_admin": false, 00:10:51.456 "nvme_io": false, 00:10:51.456 "nvme_io_md": false, 00:10:51.456 "write_zeroes": true, 00:10:51.456 "zcopy": true, 00:10:51.456 "get_zone_info": false, 00:10:51.456 "zone_management": false, 00:10:51.456 "zone_append": false, 00:10:51.456 "compare": false, 00:10:51.456 "compare_and_write": false, 00:10:51.456 "abort": true, 00:10:51.456 "seek_hole": false, 00:10:51.456 "seek_data": false, 00:10:51.456 "copy": true, 00:10:51.456 "nvme_iov_md": false 00:10:51.456 }, 00:10:51.456 "memory_domains": [ 00:10:51.456 { 00:10:51.456 "dma_device_id": "system", 00:10:51.456 "dma_device_type": 1 00:10:51.456 }, 00:10:51.456 { 00:10:51.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.456 "dma_device_type": 2 00:10:51.456 } 00:10:51.456 ], 00:10:51.456 "driver_specific": {} 00:10:51.456 } 00:10:51.456 ] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.456 09:29:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.456 BaseBdev3 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.456 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.716 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.716 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:51.716 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.716 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.716 [ 00:10:51.716 { 
00:10:51.716 "name": "BaseBdev3", 00:10:51.716 "aliases": [ 00:10:51.716 "9edacf1a-d869-4420-982d-010b27459400" 00:10:51.716 ], 00:10:51.716 "product_name": "Malloc disk", 00:10:51.716 "block_size": 512, 00:10:51.716 "num_blocks": 65536, 00:10:51.716 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:51.716 "assigned_rate_limits": { 00:10:51.716 "rw_ios_per_sec": 0, 00:10:51.716 "rw_mbytes_per_sec": 0, 00:10:51.716 "r_mbytes_per_sec": 0, 00:10:51.716 "w_mbytes_per_sec": 0 00:10:51.716 }, 00:10:51.716 "claimed": false, 00:10:51.716 "zoned": false, 00:10:51.716 "supported_io_types": { 00:10:51.716 "read": true, 00:10:51.716 "write": true, 00:10:51.716 "unmap": true, 00:10:51.716 "flush": true, 00:10:51.716 "reset": true, 00:10:51.716 "nvme_admin": false, 00:10:51.716 "nvme_io": false, 00:10:51.716 "nvme_io_md": false, 00:10:51.716 "write_zeroes": true, 00:10:51.716 "zcopy": true, 00:10:51.716 "get_zone_info": false, 00:10:51.716 "zone_management": false, 00:10:51.716 "zone_append": false, 00:10:51.716 "compare": false, 00:10:51.716 "compare_and_write": false, 00:10:51.716 "abort": true, 00:10:51.716 "seek_hole": false, 00:10:51.716 "seek_data": false, 00:10:51.716 "copy": true, 00:10:51.716 "nvme_iov_md": false 00:10:51.716 }, 00:10:51.716 "memory_domains": [ 00:10:51.716 { 00:10:51.716 "dma_device_id": "system", 00:10:51.716 "dma_device_type": 1 00:10:51.716 }, 00:10:51.716 { 00:10:51.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.716 "dma_device_type": 2 00:10:51.716 } 00:10:51.716 ], 00:10:51.717 "driver_specific": {} 00:10:51.717 } 00:10:51.717 ] 00:10:51.717 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.717 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:51.717 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.717 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:51.717 09:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:51.717 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.717 09:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.717 BaseBdev4 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:51.717 [ 00:10:51.717 { 00:10:51.717 "name": "BaseBdev4", 00:10:51.717 "aliases": [ 00:10:51.717 "bfb7f788-cfdb-4e6e-8006-4e7043628687" 00:10:51.717 ], 00:10:51.717 "product_name": "Malloc disk", 00:10:51.717 "block_size": 512, 00:10:51.717 "num_blocks": 65536, 00:10:51.717 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:51.717 "assigned_rate_limits": { 00:10:51.717 "rw_ios_per_sec": 0, 00:10:51.717 "rw_mbytes_per_sec": 0, 00:10:51.717 "r_mbytes_per_sec": 0, 00:10:51.717 "w_mbytes_per_sec": 0 00:10:51.717 }, 00:10:51.717 "claimed": false, 00:10:51.717 "zoned": false, 00:10:51.717 "supported_io_types": { 00:10:51.717 "read": true, 00:10:51.717 "write": true, 00:10:51.717 "unmap": true, 00:10:51.717 "flush": true, 00:10:51.717 "reset": true, 00:10:51.717 "nvme_admin": false, 00:10:51.717 "nvme_io": false, 00:10:51.717 "nvme_io_md": false, 00:10:51.717 "write_zeroes": true, 00:10:51.717 "zcopy": true, 00:10:51.717 "get_zone_info": false, 00:10:51.717 "zone_management": false, 00:10:51.717 "zone_append": false, 00:10:51.717 "compare": false, 00:10:51.717 "compare_and_write": false, 00:10:51.717 "abort": true, 00:10:51.717 "seek_hole": false, 00:10:51.717 "seek_data": false, 00:10:51.717 "copy": true, 00:10:51.717 "nvme_iov_md": false 00:10:51.717 }, 00:10:51.717 "memory_domains": [ 00:10:51.717 { 00:10:51.717 "dma_device_id": "system", 00:10:51.717 "dma_device_type": 1 00:10:51.717 }, 00:10:51.717 { 00:10:51.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.717 "dma_device_type": 2 00:10:51.717 } 00:10:51.717 ], 00:10:51.717 "driver_specific": {} 00:10:51.717 } 00:10:51.717 ] 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:51.717 09:29:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.717 [2024-11-15 09:29:40.050736] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.717 [2024-11-15 09:29:40.050916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.717 [2024-11-15 09:29:40.050978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.717 [2024-11-15 09:29:40.052984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.717 [2024-11-15 09:29:40.053083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.717 "name": "Existed_Raid", 00:10:51.717 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:51.717 "strip_size_kb": 64, 00:10:51.717 "state": "configuring", 00:10:51.717 "raid_level": "raid0", 00:10:51.717 "superblock": true, 00:10:51.717 "num_base_bdevs": 4, 00:10:51.717 "num_base_bdevs_discovered": 3, 00:10:51.717 "num_base_bdevs_operational": 4, 00:10:51.717 "base_bdevs_list": [ 00:10:51.717 { 00:10:51.717 "name": "BaseBdev1", 00:10:51.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.717 "is_configured": false, 00:10:51.717 "data_offset": 0, 00:10:51.717 "data_size": 0 00:10:51.717 }, 00:10:51.717 { 00:10:51.717 "name": "BaseBdev2", 00:10:51.717 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:51.717 "is_configured": true, 00:10:51.717 "data_offset": 2048, 00:10:51.717 "data_size": 63488 
00:10:51.717 }, 00:10:51.717 { 00:10:51.717 "name": "BaseBdev3", 00:10:51.717 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:51.717 "is_configured": true, 00:10:51.717 "data_offset": 2048, 00:10:51.717 "data_size": 63488 00:10:51.717 }, 00:10:51.717 { 00:10:51.717 "name": "BaseBdev4", 00:10:51.717 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:51.717 "is_configured": true, 00:10:51.717 "data_offset": 2048, 00:10:51.717 "data_size": 63488 00:10:51.717 } 00:10:51.717 ] 00:10:51.717 }' 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.717 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.285 [2024-11-15 09:29:40.525970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.285 "name": "Existed_Raid", 00:10:52.285 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:52.285 "strip_size_kb": 64, 00:10:52.285 "state": "configuring", 00:10:52.285 "raid_level": "raid0", 00:10:52.285 "superblock": true, 00:10:52.285 "num_base_bdevs": 4, 00:10:52.285 "num_base_bdevs_discovered": 2, 00:10:52.285 "num_base_bdevs_operational": 4, 00:10:52.285 "base_bdevs_list": [ 00:10:52.285 { 00:10:52.285 "name": "BaseBdev1", 00:10:52.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.285 "is_configured": false, 00:10:52.285 "data_offset": 0, 00:10:52.285 "data_size": 0 00:10:52.285 }, 00:10:52.285 { 00:10:52.285 "name": null, 00:10:52.285 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:52.285 "is_configured": false, 00:10:52.285 "data_offset": 0, 00:10:52.285 "data_size": 63488 
00:10:52.285 }, 00:10:52.285 { 00:10:52.285 "name": "BaseBdev3", 00:10:52.285 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:52.285 "is_configured": true, 00:10:52.285 "data_offset": 2048, 00:10:52.285 "data_size": 63488 00:10:52.285 }, 00:10:52.285 { 00:10:52.285 "name": "BaseBdev4", 00:10:52.285 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:52.285 "is_configured": true, 00:10:52.285 "data_offset": 2048, 00:10:52.285 "data_size": 63488 00:10:52.285 } 00:10:52.285 ] 00:10:52.285 }' 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.285 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.544 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.544 09:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.544 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.544 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.544 09:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.544 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:52.544 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:52.544 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.544 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.803 BaseBdev1 00:10:52.804 [2024-11-15 09:29:41.043141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.804 [ 00:10:52.804 { 00:10:52.804 "name": "BaseBdev1", 00:10:52.804 "aliases": [ 00:10:52.804 "5635534d-2ef0-43ef-a2a8-42c4ff6757a6" 00:10:52.804 ], 00:10:52.804 "product_name": "Malloc disk", 00:10:52.804 "block_size": 512, 00:10:52.804 "num_blocks": 65536, 00:10:52.804 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:52.804 "assigned_rate_limits": { 00:10:52.804 "rw_ios_per_sec": 0, 00:10:52.804 "rw_mbytes_per_sec": 0, 
00:10:52.804 "r_mbytes_per_sec": 0, 00:10:52.804 "w_mbytes_per_sec": 0 00:10:52.804 }, 00:10:52.804 "claimed": true, 00:10:52.804 "claim_type": "exclusive_write", 00:10:52.804 "zoned": false, 00:10:52.804 "supported_io_types": { 00:10:52.804 "read": true, 00:10:52.804 "write": true, 00:10:52.804 "unmap": true, 00:10:52.804 "flush": true, 00:10:52.804 "reset": true, 00:10:52.804 "nvme_admin": false, 00:10:52.804 "nvme_io": false, 00:10:52.804 "nvme_io_md": false, 00:10:52.804 "write_zeroes": true, 00:10:52.804 "zcopy": true, 00:10:52.804 "get_zone_info": false, 00:10:52.804 "zone_management": false, 00:10:52.804 "zone_append": false, 00:10:52.804 "compare": false, 00:10:52.804 "compare_and_write": false, 00:10:52.804 "abort": true, 00:10:52.804 "seek_hole": false, 00:10:52.804 "seek_data": false, 00:10:52.804 "copy": true, 00:10:52.804 "nvme_iov_md": false 00:10:52.804 }, 00:10:52.804 "memory_domains": [ 00:10:52.804 { 00:10:52.804 "dma_device_id": "system", 00:10:52.804 "dma_device_type": 1 00:10:52.804 }, 00:10:52.804 { 00:10:52.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.804 "dma_device_type": 2 00:10:52.804 } 00:10:52.804 ], 00:10:52.804 "driver_specific": {} 00:10:52.804 } 00:10:52.804 ] 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.804 09:29:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.804 "name": "Existed_Raid", 00:10:52.804 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:52.804 "strip_size_kb": 64, 00:10:52.804 "state": "configuring", 00:10:52.804 "raid_level": "raid0", 00:10:52.804 "superblock": true, 00:10:52.804 "num_base_bdevs": 4, 00:10:52.804 "num_base_bdevs_discovered": 3, 00:10:52.804 "num_base_bdevs_operational": 4, 00:10:52.804 "base_bdevs_list": [ 00:10:52.804 { 00:10:52.804 "name": "BaseBdev1", 00:10:52.804 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:52.804 "is_configured": true, 00:10:52.804 "data_offset": 2048, 00:10:52.804 "data_size": 63488 00:10:52.804 }, 00:10:52.804 { 
00:10:52.804 "name": null, 00:10:52.804 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:52.804 "is_configured": false, 00:10:52.804 "data_offset": 0, 00:10:52.804 "data_size": 63488 00:10:52.804 }, 00:10:52.804 { 00:10:52.804 "name": "BaseBdev3", 00:10:52.804 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:52.804 "is_configured": true, 00:10:52.804 "data_offset": 2048, 00:10:52.804 "data_size": 63488 00:10:52.804 }, 00:10:52.804 { 00:10:52.804 "name": "BaseBdev4", 00:10:52.804 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:52.804 "is_configured": true, 00:10:52.804 "data_offset": 2048, 00:10:52.804 "data_size": 63488 00:10:52.804 } 00:10:52.804 ] 00:10:52.804 }' 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.804 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.372 [2024-11-15 09:29:41.582393] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.372 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.372 09:29:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.372 "name": "Existed_Raid", 00:10:53.372 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:53.372 "strip_size_kb": 64, 00:10:53.373 "state": "configuring", 00:10:53.373 "raid_level": "raid0", 00:10:53.373 "superblock": true, 00:10:53.373 "num_base_bdevs": 4, 00:10:53.373 "num_base_bdevs_discovered": 2, 00:10:53.373 "num_base_bdevs_operational": 4, 00:10:53.373 "base_bdevs_list": [ 00:10:53.373 { 00:10:53.373 "name": "BaseBdev1", 00:10:53.373 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:53.373 "is_configured": true, 00:10:53.373 "data_offset": 2048, 00:10:53.373 "data_size": 63488 00:10:53.373 }, 00:10:53.373 { 00:10:53.373 "name": null, 00:10:53.373 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:53.373 "is_configured": false, 00:10:53.373 "data_offset": 0, 00:10:53.373 "data_size": 63488 00:10:53.373 }, 00:10:53.373 { 00:10:53.373 "name": null, 00:10:53.373 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:53.373 "is_configured": false, 00:10:53.373 "data_offset": 0, 00:10:53.373 "data_size": 63488 00:10:53.373 }, 00:10:53.373 { 00:10:53.373 "name": "BaseBdev4", 00:10:53.373 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:53.373 "is_configured": true, 00:10:53.373 "data_offset": 2048, 00:10:53.373 "data_size": 63488 00:10:53.373 } 00:10:53.373 ] 00:10:53.373 }' 00:10:53.373 09:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.373 09:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.631 
09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.631 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.891 [2024-11-15 09:29:42.097518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.891 "name": "Existed_Raid", 00:10:53.891 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:53.891 "strip_size_kb": 64, 00:10:53.891 "state": "configuring", 00:10:53.891 "raid_level": "raid0", 00:10:53.891 "superblock": true, 00:10:53.891 "num_base_bdevs": 4, 00:10:53.891 "num_base_bdevs_discovered": 3, 00:10:53.891 "num_base_bdevs_operational": 4, 00:10:53.891 "base_bdevs_list": [ 00:10:53.891 { 00:10:53.891 "name": "BaseBdev1", 00:10:53.891 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:53.891 "is_configured": true, 00:10:53.891 "data_offset": 2048, 00:10:53.891 "data_size": 63488 00:10:53.891 }, 00:10:53.891 { 00:10:53.891 "name": null, 00:10:53.891 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:53.891 "is_configured": false, 00:10:53.891 "data_offset": 0, 00:10:53.891 "data_size": 63488 00:10:53.891 }, 00:10:53.891 { 00:10:53.891 "name": "BaseBdev3", 00:10:53.891 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:53.891 "is_configured": true, 00:10:53.891 "data_offset": 2048, 00:10:53.891 "data_size": 63488 00:10:53.891 }, 00:10:53.891 { 00:10:53.891 "name": "BaseBdev4", 00:10:53.891 "uuid": 
"bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:53.891 "is_configured": true, 00:10:53.891 "data_offset": 2048, 00:10:53.891 "data_size": 63488 00:10:53.891 } 00:10:53.891 ] 00:10:53.891 }' 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.891 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.151 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:54.151 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.151 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.151 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.151 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.414 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.415 [2024-11-15 09:29:42.632675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.415 "name": "Existed_Raid", 00:10:54.415 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:54.415 "strip_size_kb": 64, 00:10:54.415 "state": "configuring", 00:10:54.415 "raid_level": "raid0", 00:10:54.415 "superblock": true, 00:10:54.415 "num_base_bdevs": 4, 00:10:54.415 "num_base_bdevs_discovered": 2, 00:10:54.415 "num_base_bdevs_operational": 4, 00:10:54.415 "base_bdevs_list": [ 00:10:54.415 { 00:10:54.415 "name": null, 00:10:54.415 
"uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:54.415 "is_configured": false, 00:10:54.415 "data_offset": 0, 00:10:54.415 "data_size": 63488 00:10:54.415 }, 00:10:54.415 { 00:10:54.415 "name": null, 00:10:54.415 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:54.415 "is_configured": false, 00:10:54.415 "data_offset": 0, 00:10:54.415 "data_size": 63488 00:10:54.415 }, 00:10:54.415 { 00:10:54.415 "name": "BaseBdev3", 00:10:54.415 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:54.415 "is_configured": true, 00:10:54.415 "data_offset": 2048, 00:10:54.415 "data_size": 63488 00:10:54.415 }, 00:10:54.415 { 00:10:54.415 "name": "BaseBdev4", 00:10:54.415 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:54.415 "is_configured": true, 00:10:54.415 "data_offset": 2048, 00:10:54.415 "data_size": 63488 00:10:54.415 } 00:10:54.415 ] 00:10:54.415 }' 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.415 09:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 [2024-11-15 09:29:43.260315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.006 09:29:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.006 "name": "Existed_Raid", 00:10:55.006 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:55.006 "strip_size_kb": 64, 00:10:55.006 "state": "configuring", 00:10:55.006 "raid_level": "raid0", 00:10:55.006 "superblock": true, 00:10:55.006 "num_base_bdevs": 4, 00:10:55.006 "num_base_bdevs_discovered": 3, 00:10:55.006 "num_base_bdevs_operational": 4, 00:10:55.006 "base_bdevs_list": [ 00:10:55.006 { 00:10:55.006 "name": null, 00:10:55.006 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:55.006 "is_configured": false, 00:10:55.006 "data_offset": 0, 00:10:55.006 "data_size": 63488 00:10:55.006 }, 00:10:55.006 { 00:10:55.006 "name": "BaseBdev2", 00:10:55.006 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:55.006 "is_configured": true, 00:10:55.006 "data_offset": 2048, 00:10:55.006 "data_size": 63488 00:10:55.006 }, 00:10:55.006 { 00:10:55.006 "name": "BaseBdev3", 00:10:55.006 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:55.006 "is_configured": true, 00:10:55.006 "data_offset": 2048, 00:10:55.006 "data_size": 63488 00:10:55.006 }, 00:10:55.006 { 00:10:55.006 "name": "BaseBdev4", 00:10:55.006 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:55.006 "is_configured": true, 00:10:55.006 "data_offset": 2048, 00:10:55.006 "data_size": 63488 00:10:55.006 } 00:10:55.006 ] 00:10:55.006 }' 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.006 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.612 09:29:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5635534d-2ef0-43ef-a2a8-42c4ff6757a6 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.612 [2024-11-15 09:29:43.885412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:55.612 [2024-11-15 09:29:43.885776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.612 NewBaseBdev 00:10:55.612 [2024-11-15 09:29:43.885823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.612 [2024-11-15 09:29:43.886122] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:55.612 [2024-11-15 09:29:43.886279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.612 [2024-11-15 09:29:43.886293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:55.612 [2024-11-15 09:29:43.886431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.612 
09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.612 [ 00:10:55.612 { 00:10:55.612 "name": "NewBaseBdev", 00:10:55.612 "aliases": [ 00:10:55.612 "5635534d-2ef0-43ef-a2a8-42c4ff6757a6" 00:10:55.612 ], 00:10:55.612 "product_name": "Malloc disk", 00:10:55.612 "block_size": 512, 00:10:55.612 "num_blocks": 65536, 00:10:55.612 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:55.612 "assigned_rate_limits": { 00:10:55.612 "rw_ios_per_sec": 0, 00:10:55.612 "rw_mbytes_per_sec": 0, 00:10:55.612 "r_mbytes_per_sec": 0, 00:10:55.612 "w_mbytes_per_sec": 0 00:10:55.612 }, 00:10:55.612 "claimed": true, 00:10:55.612 "claim_type": "exclusive_write", 00:10:55.612 "zoned": false, 00:10:55.612 "supported_io_types": { 00:10:55.612 "read": true, 00:10:55.612 "write": true, 00:10:55.612 "unmap": true, 00:10:55.612 "flush": true, 00:10:55.612 "reset": true, 00:10:55.612 "nvme_admin": false, 00:10:55.612 "nvme_io": false, 00:10:55.612 "nvme_io_md": false, 00:10:55.612 "write_zeroes": true, 00:10:55.612 "zcopy": true, 00:10:55.612 "get_zone_info": false, 00:10:55.612 "zone_management": false, 00:10:55.612 "zone_append": false, 00:10:55.612 "compare": false, 00:10:55.612 "compare_and_write": false, 00:10:55.612 "abort": true, 00:10:55.612 "seek_hole": false, 00:10:55.612 "seek_data": false, 00:10:55.612 "copy": true, 00:10:55.612 "nvme_iov_md": false 00:10:55.612 }, 00:10:55.612 "memory_domains": [ 00:10:55.612 { 00:10:55.612 "dma_device_id": "system", 00:10:55.612 "dma_device_type": 1 00:10:55.612 }, 00:10:55.612 { 00:10:55.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.612 "dma_device_type": 2 00:10:55.612 } 00:10:55.612 ], 00:10:55.612 "driver_specific": {} 00:10:55.612 } 00:10:55.612 ] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:55.612 09:29:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.612 "name": "Existed_Raid", 00:10:55.612 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:55.612 "strip_size_kb": 64, 00:10:55.612 
"state": "online", 00:10:55.612 "raid_level": "raid0", 00:10:55.612 "superblock": true, 00:10:55.612 "num_base_bdevs": 4, 00:10:55.612 "num_base_bdevs_discovered": 4, 00:10:55.612 "num_base_bdevs_operational": 4, 00:10:55.612 "base_bdevs_list": [ 00:10:55.612 { 00:10:55.612 "name": "NewBaseBdev", 00:10:55.612 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:55.612 "is_configured": true, 00:10:55.612 "data_offset": 2048, 00:10:55.612 "data_size": 63488 00:10:55.612 }, 00:10:55.612 { 00:10:55.612 "name": "BaseBdev2", 00:10:55.612 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:55.612 "is_configured": true, 00:10:55.612 "data_offset": 2048, 00:10:55.612 "data_size": 63488 00:10:55.612 }, 00:10:55.612 { 00:10:55.612 "name": "BaseBdev3", 00:10:55.612 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:55.612 "is_configured": true, 00:10:55.612 "data_offset": 2048, 00:10:55.612 "data_size": 63488 00:10:55.612 }, 00:10:55.612 { 00:10:55.612 "name": "BaseBdev4", 00:10:55.612 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:55.612 "is_configured": true, 00:10:55.612 "data_offset": 2048, 00:10:55.612 "data_size": 63488 00:10:55.612 } 00:10:55.612 ] 00:10:55.612 }' 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.612 09:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.181 
09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.181 [2024-11-15 09:29:44.424955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.181 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.181 "name": "Existed_Raid", 00:10:56.181 "aliases": [ 00:10:56.181 "54282f27-abb6-43dd-8f51-284fc87a9dab" 00:10:56.181 ], 00:10:56.181 "product_name": "Raid Volume", 00:10:56.181 "block_size": 512, 00:10:56.181 "num_blocks": 253952, 00:10:56.181 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:56.181 "assigned_rate_limits": { 00:10:56.181 "rw_ios_per_sec": 0, 00:10:56.181 "rw_mbytes_per_sec": 0, 00:10:56.181 "r_mbytes_per_sec": 0, 00:10:56.181 "w_mbytes_per_sec": 0 00:10:56.181 }, 00:10:56.181 "claimed": false, 00:10:56.181 "zoned": false, 00:10:56.181 "supported_io_types": { 00:10:56.181 "read": true, 00:10:56.181 "write": true, 00:10:56.181 "unmap": true, 00:10:56.181 "flush": true, 00:10:56.181 "reset": true, 00:10:56.181 "nvme_admin": false, 00:10:56.181 "nvme_io": false, 00:10:56.181 "nvme_io_md": false, 00:10:56.181 "write_zeroes": true, 00:10:56.181 "zcopy": false, 00:10:56.181 "get_zone_info": false, 00:10:56.181 "zone_management": false, 00:10:56.181 "zone_append": false, 00:10:56.181 "compare": false, 00:10:56.181 "compare_and_write": false, 00:10:56.181 "abort": 
false, 00:10:56.181 "seek_hole": false, 00:10:56.181 "seek_data": false, 00:10:56.181 "copy": false, 00:10:56.181 "nvme_iov_md": false 00:10:56.181 }, 00:10:56.181 "memory_domains": [ 00:10:56.181 { 00:10:56.181 "dma_device_id": "system", 00:10:56.181 "dma_device_type": 1 00:10:56.181 }, 00:10:56.181 { 00:10:56.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.181 "dma_device_type": 2 00:10:56.181 }, 00:10:56.181 { 00:10:56.181 "dma_device_id": "system", 00:10:56.181 "dma_device_type": 1 00:10:56.181 }, 00:10:56.181 { 00:10:56.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.181 "dma_device_type": 2 00:10:56.181 }, 00:10:56.181 { 00:10:56.181 "dma_device_id": "system", 00:10:56.181 "dma_device_type": 1 00:10:56.181 }, 00:10:56.181 { 00:10:56.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.181 "dma_device_type": 2 00:10:56.181 }, 00:10:56.181 { 00:10:56.181 "dma_device_id": "system", 00:10:56.181 "dma_device_type": 1 00:10:56.181 }, 00:10:56.181 { 00:10:56.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.181 "dma_device_type": 2 00:10:56.181 } 00:10:56.181 ], 00:10:56.181 "driver_specific": { 00:10:56.181 "raid": { 00:10:56.181 "uuid": "54282f27-abb6-43dd-8f51-284fc87a9dab", 00:10:56.181 "strip_size_kb": 64, 00:10:56.181 "state": "online", 00:10:56.181 "raid_level": "raid0", 00:10:56.181 "superblock": true, 00:10:56.181 "num_base_bdevs": 4, 00:10:56.181 "num_base_bdevs_discovered": 4, 00:10:56.181 "num_base_bdevs_operational": 4, 00:10:56.181 "base_bdevs_list": [ 00:10:56.181 { 00:10:56.181 "name": "NewBaseBdev", 00:10:56.182 "uuid": "5635534d-2ef0-43ef-a2a8-42c4ff6757a6", 00:10:56.182 "is_configured": true, 00:10:56.182 "data_offset": 2048, 00:10:56.182 "data_size": 63488 00:10:56.182 }, 00:10:56.182 { 00:10:56.182 "name": "BaseBdev2", 00:10:56.182 "uuid": "dd9c66c0-ddcf-4279-b3e2-09f5d55819e9", 00:10:56.182 "is_configured": true, 00:10:56.182 "data_offset": 2048, 00:10:56.182 "data_size": 63488 00:10:56.182 }, 00:10:56.182 { 00:10:56.182 
"name": "BaseBdev3", 00:10:56.182 "uuid": "9edacf1a-d869-4420-982d-010b27459400", 00:10:56.182 "is_configured": true, 00:10:56.182 "data_offset": 2048, 00:10:56.182 "data_size": 63488 00:10:56.182 }, 00:10:56.182 { 00:10:56.182 "name": "BaseBdev4", 00:10:56.182 "uuid": "bfb7f788-cfdb-4e6e-8006-4e7043628687", 00:10:56.182 "is_configured": true, 00:10:56.182 "data_offset": 2048, 00:10:56.182 "data_size": 63488 00:10:56.182 } 00:10:56.182 ] 00:10:56.182 } 00:10:56.182 } 00:10:56.182 }' 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:56.182 BaseBdev2 00:10:56.182 BaseBdev3 00:10:56.182 BaseBdev4' 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.182 09:29:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.182 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 [2024-11-15 09:29:44.772028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.440 [2024-11-15 09:29:44.772062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.440 [2024-11-15 09:29:44.772144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.440 [2024-11-15 09:29:44.772209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.440 [2024-11-15 09:29:44.772220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70400 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70400 ']' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70400 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70400 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:56.440 killing process with pid 70400 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70400' 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70400 00:10:56.440 [2024-11-15 09:29:44.821355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.440 09:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70400 00:10:57.008 [2024-11-15 09:29:45.243072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.945 ************************************ 00:10:57.945 END TEST raid_state_function_test_sb 00:10:57.945 ************************************ 00:10:57.945 09:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.945 00:10:57.945 real 0m12.318s 00:10:57.945 user 0m19.551s 00:10:57.945 sys 
0m2.317s 00:10:57.945 09:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.945 09:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.205 09:29:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:58.205 09:29:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:58.205 09:29:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.205 09:29:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.205 ************************************ 00:10:58.205 START TEST raid_superblock_test 00:10:58.205 ************************************ 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71090 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71090 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 71090 ']' 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.205 09:29:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.205 [2024-11-15 09:29:46.561952] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:10:58.205 [2024-11-15 09:29:46.562101] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:10:58.465 [2024-11-15 09:29:46.743316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.465 [2024-11-15 09:29:46.882099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.725 [2024-11-15 09:29:47.114062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.725 [2024-11-15 09:29:47.114153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:58.984 
09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.984 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 malloc1 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 [2024-11-15 09:29:47.470127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.243 [2024-11-15 09:29:47.470206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.243 [2024-11-15 09:29:47.470232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:59.243 [2024-11-15 09:29:47.470243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.243 [2024-11-15 09:29:47.472812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.243 [2024-11-15 09:29:47.472867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.243 pt1 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 malloc2 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 [2024-11-15 09:29:47.542521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.243 [2024-11-15 09:29:47.542596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.243 [2024-11-15 09:29:47.542640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:59.243 [2024-11-15 09:29:47.542661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.243 [2024-11-15 09:29:47.545402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.243 [2024-11-15 09:29:47.545446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.243 
pt2 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 malloc3 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 [2024-11-15 09:29:47.628654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:59.243 [2024-11-15 09:29:47.628738] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.243 [2024-11-15 09:29:47.628764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:59.243 [2024-11-15 09:29:47.628773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.243 [2024-11-15 09:29:47.631358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.243 [2024-11-15 09:29:47.631414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:59.243 pt3 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 malloc4 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.243 [2024-11-15 09:29:47.691709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:59.243 [2024-11-15 09:29:47.691770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.243 [2024-11-15 09:29:47.691789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:59.243 [2024-11-15 09:29:47.691798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.243 [2024-11-15 09:29:47.694241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.243 [2024-11-15 09:29:47.694275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:59.243 pt4 00:10:59.243 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.244 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:59.244 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:59.244 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:59.244 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.244 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.244 [2024-11-15 09:29:47.703716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:59.244 [2024-11-15 
09:29:47.705875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.244 [2024-11-15 09:29:47.705960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:59.244 [2024-11-15 09:29:47.706027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:59.244 [2024-11-15 09:29:47.706259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:59.244 [2024-11-15 09:29:47.706278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.244 [2024-11-15 09:29:47.706570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:59.244 [2024-11-15 09:29:47.706764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:59.244 [2024-11-15 09:29:47.706785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:59.244 [2024-11-15 09:29:47.706954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.502 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.502 "name": "raid_bdev1", 00:10:59.502 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:10:59.502 "strip_size_kb": 64, 00:10:59.502 "state": "online", 00:10:59.502 "raid_level": "raid0", 00:10:59.502 "superblock": true, 00:10:59.502 "num_base_bdevs": 4, 00:10:59.503 "num_base_bdevs_discovered": 4, 00:10:59.503 "num_base_bdevs_operational": 4, 00:10:59.503 "base_bdevs_list": [ 00:10:59.503 { 00:10:59.503 "name": "pt1", 00:10:59.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.503 "is_configured": true, 00:10:59.503 "data_offset": 2048, 00:10:59.503 "data_size": 63488 00:10:59.503 }, 00:10:59.503 { 00:10:59.503 "name": "pt2", 00:10:59.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.503 "is_configured": true, 00:10:59.503 "data_offset": 2048, 00:10:59.503 "data_size": 63488 00:10:59.503 }, 00:10:59.503 { 00:10:59.503 "name": "pt3", 00:10:59.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.503 "is_configured": true, 00:10:59.503 "data_offset": 2048, 00:10:59.503 
"data_size": 63488 00:10:59.503 }, 00:10:59.503 { 00:10:59.503 "name": "pt4", 00:10:59.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:59.503 "is_configured": true, 00:10:59.503 "data_offset": 2048, 00:10:59.503 "data_size": 63488 00:10:59.503 } 00:10:59.503 ] 00:10:59.503 }' 00:10:59.503 09:29:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.503 09:29:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.761 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.761 [2024-11-15 09:29:48.203253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.019 "name": "raid_bdev1", 00:11:00.019 "aliases": [ 00:11:00.019 "91ac76dc-e1a7-408f-bb91-44fecda903fb" 
00:11:00.019 ], 00:11:00.019 "product_name": "Raid Volume", 00:11:00.019 "block_size": 512, 00:11:00.019 "num_blocks": 253952, 00:11:00.019 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:11:00.019 "assigned_rate_limits": { 00:11:00.019 "rw_ios_per_sec": 0, 00:11:00.019 "rw_mbytes_per_sec": 0, 00:11:00.019 "r_mbytes_per_sec": 0, 00:11:00.019 "w_mbytes_per_sec": 0 00:11:00.019 }, 00:11:00.019 "claimed": false, 00:11:00.019 "zoned": false, 00:11:00.019 "supported_io_types": { 00:11:00.019 "read": true, 00:11:00.019 "write": true, 00:11:00.019 "unmap": true, 00:11:00.019 "flush": true, 00:11:00.019 "reset": true, 00:11:00.019 "nvme_admin": false, 00:11:00.019 "nvme_io": false, 00:11:00.019 "nvme_io_md": false, 00:11:00.019 "write_zeroes": true, 00:11:00.019 "zcopy": false, 00:11:00.019 "get_zone_info": false, 00:11:00.019 "zone_management": false, 00:11:00.019 "zone_append": false, 00:11:00.019 "compare": false, 00:11:00.019 "compare_and_write": false, 00:11:00.019 "abort": false, 00:11:00.019 "seek_hole": false, 00:11:00.019 "seek_data": false, 00:11:00.019 "copy": false, 00:11:00.019 "nvme_iov_md": false 00:11:00.019 }, 00:11:00.019 "memory_domains": [ 00:11:00.019 { 00:11:00.019 "dma_device_id": "system", 00:11:00.019 "dma_device_type": 1 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.019 "dma_device_type": 2 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "dma_device_id": "system", 00:11:00.019 "dma_device_type": 1 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.019 "dma_device_type": 2 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "dma_device_id": "system", 00:11:00.019 "dma_device_type": 1 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.019 "dma_device_type": 2 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "dma_device_id": "system", 00:11:00.019 "dma_device_type": 1 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:00.019 "dma_device_type": 2 00:11:00.019 } 00:11:00.019 ], 00:11:00.019 "driver_specific": { 00:11:00.019 "raid": { 00:11:00.019 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:11:00.019 "strip_size_kb": 64, 00:11:00.019 "state": "online", 00:11:00.019 "raid_level": "raid0", 00:11:00.019 "superblock": true, 00:11:00.019 "num_base_bdevs": 4, 00:11:00.019 "num_base_bdevs_discovered": 4, 00:11:00.019 "num_base_bdevs_operational": 4, 00:11:00.019 "base_bdevs_list": [ 00:11:00.019 { 00:11:00.019 "name": "pt1", 00:11:00.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.019 "is_configured": true, 00:11:00.019 "data_offset": 2048, 00:11:00.019 "data_size": 63488 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "name": "pt2", 00:11:00.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.019 "is_configured": true, 00:11:00.019 "data_offset": 2048, 00:11:00.019 "data_size": 63488 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "name": "pt3", 00:11:00.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.019 "is_configured": true, 00:11:00.019 "data_offset": 2048, 00:11:00.019 "data_size": 63488 00:11:00.019 }, 00:11:00.019 { 00:11:00.019 "name": "pt4", 00:11:00.019 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.019 "is_configured": true, 00:11:00.019 "data_offset": 2048, 00:11:00.019 "data_size": 63488 00:11:00.019 } 00:11:00.019 ] 00:11:00.019 } 00:11:00.019 } 00:11:00.019 }' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:00.019 pt2 00:11:00.019 pt3 00:11:00.019 pt4' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.019 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.020 09:29:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.020 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
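The loop traced above (@187-193) verifies that the raid volume and each passthru base bdev (`pt1`..`pt4`) report the same `block_size`/`md_size`/`md_interleave`/`dif_type` tuple: `jq` joins the four fields into a string (here `'512   '`, since the metadata fields are empty) and the script compares the raid volume's string against each base bdev's. A hedged sketch of just the comparison, with the tuples hard-coded in place of the real `rpc_cmd bdev_get_bdevs | jq` output:

```shell
# Sketch of the property-comparison loop: every base bdev's
# [block_size, md_size, md_interleave, dif_type] tuple must match
# the raid volume's. Tuples are hard-coded here for illustration;
# the real script derives them with jq from bdev_get_bdevs output.
cmp_raid_bdev='512   '            # "512" joined with three empty md fields
for name in pt1 pt2 pt3 pt4; do
    cmp_base_bdev='512   '        # stand-in for the per-bdev jq result
    [[ $cmp_raid_bdev == "$cmp_base_bdev" ]] || echo "$name mismatch"
done
echo "all base bdevs match"
```

A mismatch on any base bdev would print a diagnostic and (in the real test, via `set -e` semantics) fail the run; here the matching tuples produce only the final confirmation line.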
00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:00.278 [2024-11-15 09:29:48.506694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=91ac76dc-e1a7-408f-bb91-44fecda903fb 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 91ac76dc-e1a7-408f-bb91-44fecda903fb ']' 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.278 [2024-11-15 09:29:48.558268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.278 [2024-11-15 09:29:48.558309] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.278 [2024-11-15 09:29:48.558433] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.278 [2024-11-15 09:29:48.558522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.278 [2024-11-15 09:29:48.558541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.278 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.279 [2024-11-15 09:29:48.714077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:00.279 [2024-11-15 09:29:48.716428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:00.279 [2024-11-15 09:29:48.716489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:00.279 [2024-11-15 09:29:48.716525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:00.279 [2024-11-15 09:29:48.716588] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:00.279 [2024-11-15 09:29:48.716649] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:00.279 [2024-11-15 09:29:48.716669] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:00.279 [2024-11-15 09:29:48.716689] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:00.279 [2024-11-15 09:29:48.716705] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.279 [2024-11-15 09:29:48.716720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring
00:11:00.279 request:
00:11:00.279 {
00:11:00.279 "name": "raid_bdev1",
00:11:00.279 "raid_level": "raid0",
00:11:00.279 "base_bdevs": [
00:11:00.279 "malloc1",
00:11:00.279 "malloc2",
00:11:00.279 "malloc3",
00:11:00.279 "malloc4"
00:11:00.279 ],
00:11:00.279 "strip_size_kb": 64,
00:11:00.279 "superblock": false,
00:11:00.279 "method": "bdev_raid_create",
00:11:00.279 "req_id": 1
00:11:00.279 }
00:11:00.279 Got JSON-RPC error response
00:11:00.279 response:
00:11:00.279 {
00:11:00.279 "code": -17,
00:11:00.279 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:00.279 }
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.279 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u
00000000-0000-0000-0000-000000000001 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.538 [2024-11-15 09:29:48.769891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:00.538 [2024-11-15 09:29:48.769973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.538 [2024-11-15 09:29:48.769991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:00.538 [2024-11-15 09:29:48.770003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.538 [2024-11-15 09:29:48.772620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.538 [2024-11-15 09:29:48.772665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:00.538 [2024-11-15 09:29:48.772758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:00.538 [2024-11-15 09:29:48.772838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.538 pt1 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.538 "name": "raid_bdev1", 00:11:00.538 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:11:00.538 "strip_size_kb": 64, 00:11:00.538 "state": "configuring", 00:11:00.538 "raid_level": "raid0", 00:11:00.538 "superblock": true, 00:11:00.538 "num_base_bdevs": 4, 00:11:00.538 "num_base_bdevs_discovered": 1, 00:11:00.538 "num_base_bdevs_operational": 4, 00:11:00.538 "base_bdevs_list": [ 00:11:00.538 { 00:11:00.538 "name": "pt1", 00:11:00.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.538 "is_configured": true, 00:11:00.538 "data_offset": 2048, 00:11:00.538 "data_size": 63488 00:11:00.538 }, 00:11:00.538 { 00:11:00.538 "name": null, 00:11:00.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.538 "is_configured": false, 00:11:00.538 "data_offset": 2048, 00:11:00.538 "data_size": 63488 00:11:00.538 }, 00:11:00.538 { 00:11:00.538 "name": null, 00:11:00.538 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:00.538 "is_configured": false, 00:11:00.538 "data_offset": 2048, 00:11:00.538 "data_size": 63488 00:11:00.538 }, 00:11:00.538 { 00:11:00.538 "name": null, 00:11:00.538 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:00.538 "is_configured": false, 00:11:00.538 "data_offset": 2048, 00:11:00.538 "data_size": 63488 00:11:00.538 } 00:11:00.538 ] 00:11:00.538 }' 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.538 09:29:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.797 [2024-11-15 09:29:49.237144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.797 [2024-11-15 09:29:49.237275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.797 [2024-11-15 09:29:49.237298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:00.797 [2024-11-15 09:29:49.237311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.797 [2024-11-15 09:29:49.237850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.797 [2024-11-15 09:29:49.237893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.797 [2024-11-15 09:29:49.237995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.797 [2024-11-15 09:29:49.238037] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.797 pt2 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.797 [2024-11-15 09:29:49.245114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.797 09:29:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.797 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.055 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.055 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.055 "name": "raid_bdev1", 00:11:01.055 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:11:01.055 "strip_size_kb": 64, 00:11:01.055 "state": "configuring", 00:11:01.055 "raid_level": "raid0", 00:11:01.055 "superblock": true, 00:11:01.055 "num_base_bdevs": 4, 00:11:01.055 "num_base_bdevs_discovered": 1, 00:11:01.055 "num_base_bdevs_operational": 4, 00:11:01.055 "base_bdevs_list": [ 00:11:01.055 { 00:11:01.055 "name": "pt1", 00:11:01.055 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.055 "is_configured": true, 00:11:01.055 "data_offset": 2048, 00:11:01.055 "data_size": 63488 00:11:01.055 }, 00:11:01.055 { 00:11:01.055 "name": null, 00:11:01.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.055 "is_configured": false, 00:11:01.055 "data_offset": 0, 00:11:01.055 "data_size": 63488 00:11:01.055 }, 00:11:01.055 { 00:11:01.055 "name": null, 00:11:01.055 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.055 "is_configured": false, 00:11:01.055 "data_offset": 2048, 00:11:01.055 "data_size": 63488 00:11:01.055 }, 00:11:01.055 { 00:11:01.055 "name": null, 00:11:01.055 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.055 "is_configured": false, 00:11:01.055 "data_offset": 2048, 00:11:01.055 "data_size": 63488 00:11:01.055 } 00:11:01.055 ] 00:11:01.055 }' 00:11:01.055 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.055 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 [2024-11-15 09:29:49.748249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:01.313 [2024-11-15 09:29:49.748346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.313 [2024-11-15 09:29:49.748370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:01.313 [2024-11-15 09:29:49.748380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.313 [2024-11-15 09:29:49.748931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.313 [2024-11-15 09:29:49.748957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:01.313 [2024-11-15 09:29:49.749059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:01.313 [2024-11-15 09:29:49.749088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:01.313 pt2 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 [2024-11-15 09:29:49.756187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:01.313 [2024-11-15 09:29:49.756244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.313 [2024-11-15 09:29:49.756271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:01.313 [2024-11-15 09:29:49.756282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.313 [2024-11-15 09:29:49.756705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.313 [2024-11-15 09:29:49.756734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:01.313 [2024-11-15 09:29:49.756811] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:01.313 [2024-11-15 09:29:49.756831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:01.313 pt3 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.313 [2024-11-15 09:29:49.764152] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:11:01.313 [2024-11-15 09:29:49.764203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.313 [2024-11-15 09:29:49.764222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:01.313 [2024-11-15 09:29:49.764230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.313 [2024-11-15 09:29:49.764604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.313 [2024-11-15 09:29:49.764625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:01.313 [2024-11-15 09:29:49.764689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:01.313 [2024-11-15 09:29:49.764707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:01.313 [2024-11-15 09:29:49.764843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:01.313 [2024-11-15 09:29:49.764869] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:01.313 [2024-11-15 09:29:49.765115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:01.313 [2024-11-15 09:29:49.765270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:01.313 [2024-11-15 09:29:49.765289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:01.313 [2024-11-15 09:29:49.765428] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.313 pt4 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:01.313 
09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.313 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.571 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.571 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.571 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.571 "name": "raid_bdev1", 00:11:01.571 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:11:01.571 "strip_size_kb": 64, 00:11:01.571 "state": "online", 00:11:01.571 "raid_level": "raid0", 00:11:01.571 "superblock": true, 00:11:01.572 
"num_base_bdevs": 4, 00:11:01.572 "num_base_bdevs_discovered": 4, 00:11:01.572 "num_base_bdevs_operational": 4, 00:11:01.572 "base_bdevs_list": [ 00:11:01.572 { 00:11:01.572 "name": "pt1", 00:11:01.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.572 "is_configured": true, 00:11:01.572 "data_offset": 2048, 00:11:01.572 "data_size": 63488 00:11:01.572 }, 00:11:01.572 { 00:11:01.572 "name": "pt2", 00:11:01.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.572 "is_configured": true, 00:11:01.572 "data_offset": 2048, 00:11:01.572 "data_size": 63488 00:11:01.572 }, 00:11:01.572 { 00:11:01.572 "name": "pt3", 00:11:01.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.572 "is_configured": true, 00:11:01.572 "data_offset": 2048, 00:11:01.572 "data_size": 63488 00:11:01.572 }, 00:11:01.572 { 00:11:01.572 "name": "pt4", 00:11:01.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.572 "is_configured": true, 00:11:01.572 "data_offset": 2048, 00:11:01.572 "data_size": 63488 00:11:01.572 } 00:11:01.572 ] 00:11:01.572 }' 00:11:01.572 09:29:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.572 09:29:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 [2024-11-15 09:29:50.235807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.830 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.830 "name": "raid_bdev1", 00:11:01.830 "aliases": [ 00:11:01.830 "91ac76dc-e1a7-408f-bb91-44fecda903fb" 00:11:01.830 ], 00:11:01.830 "product_name": "Raid Volume", 00:11:01.830 "block_size": 512, 00:11:01.830 "num_blocks": 253952, 00:11:01.830 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:11:01.830 "assigned_rate_limits": { 00:11:01.830 "rw_ios_per_sec": 0, 00:11:01.830 "rw_mbytes_per_sec": 0, 00:11:01.830 "r_mbytes_per_sec": 0, 00:11:01.830 "w_mbytes_per_sec": 0 00:11:01.830 }, 00:11:01.830 "claimed": false, 00:11:01.830 "zoned": false, 00:11:01.830 "supported_io_types": { 00:11:01.830 "read": true, 00:11:01.830 "write": true, 00:11:01.830 "unmap": true, 00:11:01.830 "flush": true, 00:11:01.830 "reset": true, 00:11:01.830 "nvme_admin": false, 00:11:01.830 "nvme_io": false, 00:11:01.830 "nvme_io_md": false, 00:11:01.830 "write_zeroes": true, 00:11:01.830 "zcopy": false, 00:11:01.830 "get_zone_info": false, 00:11:01.830 "zone_management": false, 00:11:01.830 "zone_append": false, 00:11:01.830 "compare": false, 00:11:01.830 "compare_and_write": false, 00:11:01.830 "abort": false, 00:11:01.830 "seek_hole": false, 00:11:01.830 "seek_data": false, 00:11:01.830 "copy": false, 00:11:01.830 "nvme_iov_md": false 00:11:01.830 }, 00:11:01.830 "memory_domains": [ 00:11:01.830 { 00:11:01.830 "dma_device_id": "system", 
00:11:01.830 "dma_device_type": 1 00:11:01.830 }, 00:11:01.830 { 00:11:01.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.830 "dma_device_type": 2 00:11:01.830 }, 00:11:01.830 { 00:11:01.830 "dma_device_id": "system", 00:11:01.830 "dma_device_type": 1 00:11:01.830 }, 00:11:01.830 { 00:11:01.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.830 "dma_device_type": 2 00:11:01.830 }, 00:11:01.830 { 00:11:01.831 "dma_device_id": "system", 00:11:01.831 "dma_device_type": 1 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.831 "dma_device_type": 2 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "dma_device_id": "system", 00:11:01.831 "dma_device_type": 1 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.831 "dma_device_type": 2 00:11:01.831 } 00:11:01.831 ], 00:11:01.831 "driver_specific": { 00:11:01.831 "raid": { 00:11:01.831 "uuid": "91ac76dc-e1a7-408f-bb91-44fecda903fb", 00:11:01.831 "strip_size_kb": 64, 00:11:01.831 "state": "online", 00:11:01.831 "raid_level": "raid0", 00:11:01.831 "superblock": true, 00:11:01.831 "num_base_bdevs": 4, 00:11:01.831 "num_base_bdevs_discovered": 4, 00:11:01.831 "num_base_bdevs_operational": 4, 00:11:01.831 "base_bdevs_list": [ 00:11:01.831 { 00:11:01.831 "name": "pt1", 00:11:01.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.831 "is_configured": true, 00:11:01.831 "data_offset": 2048, 00:11:01.831 "data_size": 63488 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "name": "pt2", 00:11:01.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.831 "is_configured": true, 00:11:01.831 "data_offset": 2048, 00:11:01.831 "data_size": 63488 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "name": "pt3", 00:11:01.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.831 "is_configured": true, 00:11:01.831 "data_offset": 2048, 00:11:01.831 "data_size": 63488 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "name": "pt4", 00:11:01.831 
"uuid": "00000000-0000-0000-0000-000000000004", 00:11:01.831 "is_configured": true, 00:11:01.831 "data_offset": 2048, 00:11:01.831 "data_size": 63488 00:11:01.831 } 00:11:01.831 ] 00:11:01.831 } 00:11:01.831 } 00:11:01.831 }' 00:11:01.831 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.831 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.831 pt2 00:11:01.831 pt3 00:11:01.831 pt4' 00:11:01.831 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.089 
09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:02.089 [2024-11-15 09:29:50.503334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 91ac76dc-e1a7-408f-bb91-44fecda903fb '!=' 91ac76dc-e1a7-408f-bb91-44fecda903fb ']' 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.089 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:02.090 09:29:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71090 00:11:02.090 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 71090 ']' 00:11:02.090 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 71090 00:11:02.090 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:02.090 09:29:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:02.348 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71090 00:11:02.348 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:02.348 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:02.348 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71090' 00:11:02.348 killing process with pid 71090 00:11:02.348 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 71090 00:11:02.348 [2024-11-15 09:29:50.577465] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.348 [2024-11-15 09:29:50.577593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.348 09:29:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 71090 00:11:02.348 [2024-11-15 09:29:50.577685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.348 [2024-11-15 09:29:50.577699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:02.606 [2024-11-15 09:29:51.022180] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.980 09:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:03.980 00:11:03.980 real 0m5.820s 00:11:03.980 user 0m8.123s 00:11:03.980 sys 0m1.110s 00:11:03.980 09:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.980 09:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.980 ************************************ 00:11:03.980 END TEST raid_superblock_test 00:11:03.980 ************************************ 00:11:03.980 
09:29:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:03.980 09:29:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:03.980 09:29:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.980 09:29:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.980 ************************************ 00:11:03.980 START TEST raid_read_error_test 00:11:03.980 ************************************ 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jIjTZlvRvY 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71355 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:03.980 09:29:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71355 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71355 ']' 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.980 09:29:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.238 [2024-11-15 09:29:52.480711] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:04.238 [2024-11-15 09:29:52.480992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71355 ] 00:11:04.238 [2024-11-15 09:29:52.665059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.497 [2024-11-15 09:29:52.813443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.755 [2024-11-15 09:29:53.075609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.755 [2024-11-15 09:29:53.075701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.012 BaseBdev1_malloc 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.012 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.012 true 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.013 [2024-11-15 09:29:53.424195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:05.013 [2024-11-15 09:29:53.424298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.013 [2024-11-15 09:29:53.424329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:05.013 [2024-11-15 09:29:53.424342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.013 [2024-11-15 09:29:53.427051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.013 [2024-11-15 09:29:53.427092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:05.013 BaseBdev1 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.013 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.270 BaseBdev2_malloc 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.270 true 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.270 [2024-11-15 09:29:53.500186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:05.270 [2024-11-15 09:29:53.500256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.270 [2024-11-15 09:29:53.500293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:05.270 [2024-11-15 09:29:53.500305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.270 [2024-11-15 09:29:53.502869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.270 [2024-11-15 09:29:53.502908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:05.270 BaseBdev2 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.270 BaseBdev3_malloc 00:11:05.270 09:29:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.270 true 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.270 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.271 [2024-11-15 09:29:53.582973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:05.271 [2024-11-15 09:29:53.583052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.271 [2024-11-15 09:29:53.583076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:05.271 [2024-11-15 09:29:53.583088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.271 [2024-11-15 09:29:53.585728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.271 [2024-11-15 09:29:53.585775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:05.271 BaseBdev3 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.271 BaseBdev4_malloc 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.271 true 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.271 [2024-11-15 09:29:53.657345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:05.271 [2024-11-15 09:29:53.657517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.271 [2024-11-15 09:29:53.657566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:05.271 [2024-11-15 09:29:53.657579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.271 [2024-11-15 09:29:53.660200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.271 [2024-11-15 09:29:53.660251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:05.271 BaseBdev4 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.271 [2024-11-15 09:29:53.669417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.271 [2024-11-15 09:29:53.671664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.271 [2024-11-15 09:29:53.671808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.271 [2024-11-15 09:29:53.671906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.271 [2024-11-15 09:29:53.672221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:05.271 [2024-11-15 09:29:53.672240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:05.271 [2024-11-15 09:29:53.672585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:05.271 [2024-11-15 09:29:53.672795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:05.271 [2024-11-15 09:29:53.672809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:05.271 [2024-11-15 09:29:53.673054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:05.271 09:29:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.271 "name": "raid_bdev1", 00:11:05.271 "uuid": "c16dfc2c-33e1-40f3-9b74-7229ad2d217e", 00:11:05.271 "strip_size_kb": 64, 00:11:05.271 "state": "online", 00:11:05.271 "raid_level": "raid0", 00:11:05.271 "superblock": true, 00:11:05.271 "num_base_bdevs": 4, 00:11:05.271 "num_base_bdevs_discovered": 4, 00:11:05.271 "num_base_bdevs_operational": 4, 00:11:05.271 "base_bdevs_list": [ 00:11:05.271 
{ 00:11:05.271 "name": "BaseBdev1", 00:11:05.271 "uuid": "e2d79bec-8f7e-5863-ad86-07f5c5cb0aa6", 00:11:05.271 "is_configured": true, 00:11:05.271 "data_offset": 2048, 00:11:05.271 "data_size": 63488 00:11:05.271 }, 00:11:05.271 { 00:11:05.271 "name": "BaseBdev2", 00:11:05.271 "uuid": "b2b29bb4-eb3d-5761-b34e-7db2a6fe0984", 00:11:05.271 "is_configured": true, 00:11:05.271 "data_offset": 2048, 00:11:05.271 "data_size": 63488 00:11:05.271 }, 00:11:05.271 { 00:11:05.271 "name": "BaseBdev3", 00:11:05.271 "uuid": "dca394e9-7fd7-5c45-bd0a-71c8bc6448e7", 00:11:05.271 "is_configured": true, 00:11:05.271 "data_offset": 2048, 00:11:05.271 "data_size": 63488 00:11:05.271 }, 00:11:05.271 { 00:11:05.271 "name": "BaseBdev4", 00:11:05.271 "uuid": "88742907-391c-59e6-af22-6bf4aeae478a", 00:11:05.271 "is_configured": true, 00:11:05.271 "data_offset": 2048, 00:11:05.271 "data_size": 63488 00:11:05.271 } 00:11:05.271 ] 00:11:05.271 }' 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.271 09:29:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.835 09:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:05.835 09:29:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:05.835 [2024-11-15 09:29:54.246008] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.770 09:29:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.770 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.770 09:29:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.770 "name": "raid_bdev1", 00:11:06.770 "uuid": "c16dfc2c-33e1-40f3-9b74-7229ad2d217e", 00:11:06.770 "strip_size_kb": 64, 00:11:06.770 "state": "online", 00:11:06.770 "raid_level": "raid0", 00:11:06.770 "superblock": true, 00:11:06.770 "num_base_bdevs": 4, 00:11:06.770 "num_base_bdevs_discovered": 4, 00:11:06.770 "num_base_bdevs_operational": 4, 00:11:06.770 "base_bdevs_list": [ 00:11:06.770 { 00:11:06.770 "name": "BaseBdev1", 00:11:06.770 "uuid": "e2d79bec-8f7e-5863-ad86-07f5c5cb0aa6", 00:11:06.770 "is_configured": true, 00:11:06.770 "data_offset": 2048, 00:11:06.770 "data_size": 63488 00:11:06.770 }, 00:11:06.770 { 00:11:06.770 "name": "BaseBdev2", 00:11:06.771 "uuid": "b2b29bb4-eb3d-5761-b34e-7db2a6fe0984", 00:11:06.771 "is_configured": true, 00:11:06.771 "data_offset": 2048, 00:11:06.771 "data_size": 63488 00:11:06.771 }, 00:11:06.771 { 00:11:06.771 "name": "BaseBdev3", 00:11:06.771 "uuid": "dca394e9-7fd7-5c45-bd0a-71c8bc6448e7", 00:11:06.771 "is_configured": true, 00:11:06.771 "data_offset": 2048, 00:11:06.771 "data_size": 63488 00:11:06.771 }, 00:11:06.771 { 00:11:06.771 "name": "BaseBdev4", 00:11:06.771 "uuid": "88742907-391c-59e6-af22-6bf4aeae478a", 00:11:06.771 "is_configured": true, 00:11:06.771 "data_offset": 2048, 00:11:06.771 "data_size": 63488 00:11:06.771 } 00:11:06.771 ] 00:11:06.771 }' 00:11:06.771 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.771 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.339 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:07.339 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.339 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.339 [2024-11-15 09:29:55.632479] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:07.339 [2024-11-15 09:29:55.632528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.339 [2024-11-15 09:29:55.635525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.339 [2024-11-15 09:29:55.635598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.339 [2024-11-15 09:29:55.635650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:07.339 [2024-11-15 09:29:55.635663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:07.339 { 00:11:07.339 "results": [ 00:11:07.339 { 00:11:07.339 "job": "raid_bdev1", 00:11:07.339 "core_mask": "0x1", 00:11:07.339 "workload": "randrw", 00:11:07.339 "percentage": 50, 00:11:07.339 "status": "finished", 00:11:07.339 "queue_depth": 1, 00:11:07.339 "io_size": 131072, 00:11:07.339 "runtime": 1.386877, 00:11:07.339 "iops": 12394.033501168453, 00:11:07.339 "mibps": 1549.2541876460566, 00:11:07.339 "io_failed": 1, 00:11:07.339 "io_timeout": 0, 00:11:07.339 "avg_latency_us": 113.90662287152834, 00:11:07.339 "min_latency_us": 27.165065502183406, 00:11:07.339 "max_latency_us": 1395.1441048034935 00:11:07.339 } 00:11:07.339 ], 00:11:07.339 "core_count": 1 00:11:07.339 } 00:11:07.339 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.339 09:29:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71355 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71355 ']' 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71355 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:07.340 09:29:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71355 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71355' 00:11:07.340 killing process with pid 71355 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71355 00:11:07.340 [2024-11-15 09:29:55.687261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:07.340 09:29:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71355 00:11:07.599 [2024-11-15 09:29:56.058979] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jIjTZlvRvY 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.977 ************************************ 00:11:08.977 END TEST raid_read_error_test 00:11:08.977 ************************************ 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:08.977 
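The `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline above pulls the failures-per-second value out of the bdevperf log to verify that injected read errors actually surfaced (`0.72 != 0.00`). A standalone sketch of that extraction against a mocked log file (the column layout below is illustrative, not verbatim bdevperf output; it simply places fail/s in column 6 as the pipeline expects):

```shell
# Mocked bdevperf log: a "Job:" banner line plus one per-bdev summary row.
# Column layout is an assumption for illustration; fail/s sits in field 6.
log=$(mktemp)
cat > "$log" <<'EOF'
Job: raid_bdev1 (Core Mask 0x1)
raid_bdev1 1 12394.03 1549.25 0.00 0.72 113.91
EOF
# Same pipeline as the test script: drop the banner, keep the raid_bdev1
# row, print the failures-per-second column.
fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"   # prints 0.72
rm -f "$log"
```

A nonzero value here is what lets the raid0 case pass: raid0 has no redundancy, so the injected read error is expected to propagate to the caller rather than be repaired.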
00:11:08.977 real 0m5.016s 00:11:08.977 user 0m5.823s 00:11:08.977 sys 0m0.762s 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.977 09:29:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.977 09:29:57 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:08.977 09:29:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:08.977 09:29:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.977 09:29:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.977 ************************************ 00:11:08.977 START TEST raid_write_error_test 00:11:08.977 ************************************ 00:11:08.977 09:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:11:08.977 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:08.977 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:08.977 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.238 09:29:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Bf5gMMHVj7 
00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71507 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71507 00:11:09.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71507 ']' 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:09.238 09:29:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.238 [2024-11-15 09:29:57.555645] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:09.238 [2024-11-15 09:29:57.555815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71507 ] 00:11:09.498 [2024-11-15 09:29:57.737593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.498 [2024-11-15 09:29:57.879415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.757 [2024-11-15 09:29:58.120398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.757 [2024-11-15 09:29:58.120447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.016 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.016 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:10.016 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.016 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.016 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.016 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 BaseBdev1_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 true 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 [2024-11-15 09:29:58.515059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:10.325 [2024-11-15 09:29:58.515146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.325 [2024-11-15 09:29:58.515172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:10.325 [2024-11-15 09:29:58.515184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.325 [2024-11-15 09:29:58.518106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.325 [2024-11-15 09:29:58.518156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:10.325 BaseBdev1 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 BaseBdev2_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:10.325 09:29:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 true 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 [2024-11-15 09:29:58.588243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:10.325 [2024-11-15 09:29:58.588320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.325 [2024-11-15 09:29:58.588343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:10.325 [2024-11-15 09:29:58.588357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.325 [2024-11-15 09:29:58.591063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.325 [2024-11-15 09:29:58.591104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:10.325 BaseBdev2 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:10.325 BaseBdev3_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 true 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 [2024-11-15 09:29:58.674963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:10.325 [2024-11-15 09:29:58.675082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.325 [2024-11-15 09:29:58.675106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:10.325 [2024-11-15 09:29:58.675117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.325 [2024-11-15 09:29:58.677614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.325 [2024-11-15 09:29:58.677652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:10.325 BaseBdev3 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 BaseBdev4_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 true 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 [2024-11-15 09:29:58.749537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:10.325 [2024-11-15 09:29:58.749619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.325 [2024-11-15 09:29:58.749647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:10.325 [2024-11-15 09:29:58.749660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.325 [2024-11-15 09:29:58.752437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.325 [2024-11-15 09:29:58.752538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:10.325 BaseBdev4 
00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.325 [2024-11-15 09:29:58.761586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.325 [2024-11-15 09:29:58.763788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.325 [2024-11-15 09:29:58.763958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:10.325 [2024-11-15 09:29:58.764065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:10.325 [2024-11-15 09:29:58.764343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:10.325 [2024-11-15 09:29:58.764364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:10.325 [2024-11-15 09:29:58.764694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:10.325 [2024-11-15 09:29:58.764907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:10.325 [2024-11-15 09:29:58.764922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:10.325 [2024-11-15 09:29:58.765127] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.325 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.326 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.326 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.326 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.326 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.585 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.585 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.585 "name": "raid_bdev1", 00:11:10.585 "uuid": "45913cd1-2338-49e9-9c47-ae9aff5b3e29", 00:11:10.585 "strip_size_kb": 64, 00:11:10.585 "state": "online", 00:11:10.585 "raid_level": "raid0", 00:11:10.585 "superblock": true, 00:11:10.585 "num_base_bdevs": 4, 00:11:10.585 "num_base_bdevs_discovered": 4, 00:11:10.585 
"num_base_bdevs_operational": 4, 00:11:10.585 "base_bdevs_list": [ 00:11:10.585 { 00:11:10.585 "name": "BaseBdev1", 00:11:10.585 "uuid": "a8b96e45-ca74-5594-b10e-d74774179440", 00:11:10.585 "is_configured": true, 00:11:10.585 "data_offset": 2048, 00:11:10.585 "data_size": 63488 00:11:10.585 }, 00:11:10.585 { 00:11:10.585 "name": "BaseBdev2", 00:11:10.585 "uuid": "cd2d8ead-5716-579a-95e6-5dd3ab4c5512", 00:11:10.585 "is_configured": true, 00:11:10.585 "data_offset": 2048, 00:11:10.585 "data_size": 63488 00:11:10.585 }, 00:11:10.585 { 00:11:10.585 "name": "BaseBdev3", 00:11:10.585 "uuid": "aac972bf-2a2b-5902-87cf-c9bd1fd9b39e", 00:11:10.585 "is_configured": true, 00:11:10.585 "data_offset": 2048, 00:11:10.585 "data_size": 63488 00:11:10.585 }, 00:11:10.585 { 00:11:10.585 "name": "BaseBdev4", 00:11:10.585 "uuid": "e5bed160-166e-5406-bf6c-807172580dbc", 00:11:10.585 "is_configured": true, 00:11:10.585 "data_offset": 2048, 00:11:10.585 "data_size": 63488 00:11:10.585 } 00:11:10.585 ] 00:11:10.585 }' 00:11:10.585 09:29:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.585 09:29:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.844 09:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:10.844 09:29:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:11.102 [2024-11-15 09:29:59.330145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.039 "name": "raid_bdev1", 00:11:12.039 "uuid": "45913cd1-2338-49e9-9c47-ae9aff5b3e29", 00:11:12.039 "strip_size_kb": 64, 00:11:12.039 "state": "online", 00:11:12.039 "raid_level": "raid0", 00:11:12.039 "superblock": true, 00:11:12.039 "num_base_bdevs": 4, 00:11:12.039 "num_base_bdevs_discovered": 4, 00:11:12.039 "num_base_bdevs_operational": 4, 00:11:12.039 "base_bdevs_list": [ 00:11:12.039 { 00:11:12.039 "name": "BaseBdev1", 00:11:12.039 "uuid": "a8b96e45-ca74-5594-b10e-d74774179440", 00:11:12.039 "is_configured": true, 00:11:12.039 "data_offset": 2048, 00:11:12.039 "data_size": 63488 00:11:12.039 }, 00:11:12.039 { 00:11:12.039 "name": "BaseBdev2", 00:11:12.039 "uuid": "cd2d8ead-5716-579a-95e6-5dd3ab4c5512", 00:11:12.039 "is_configured": true, 00:11:12.039 "data_offset": 2048, 00:11:12.039 "data_size": 63488 00:11:12.039 }, 00:11:12.039 { 00:11:12.039 "name": "BaseBdev3", 00:11:12.039 "uuid": "aac972bf-2a2b-5902-87cf-c9bd1fd9b39e", 00:11:12.039 "is_configured": true, 00:11:12.039 "data_offset": 2048, 00:11:12.039 "data_size": 63488 00:11:12.039 }, 00:11:12.039 { 00:11:12.039 "name": "BaseBdev4", 00:11:12.039 "uuid": "e5bed160-166e-5406-bf6c-807172580dbc", 00:11:12.039 "is_configured": true, 00:11:12.039 "data_offset": 2048, 00:11:12.039 "data_size": 63488 00:11:12.039 } 00:11:12.039 ] 00:11:12.039 }' 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.039 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:12.299 [2024-11-15 09:30:00.732122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:12.299 [2024-11-15 09:30:00.732164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.299 [2024-11-15 09:30:00.735296] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.299 [2024-11-15 09:30:00.735380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.299 [2024-11-15 09:30:00.735433] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:12.299 [2024-11-15 09:30:00.735447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:12.299 { 00:11:12.299 "results": [ 00:11:12.299 { 00:11:12.299 "job": "raid_bdev1", 00:11:12.299 "core_mask": "0x1", 00:11:12.299 "workload": "randrw", 00:11:12.299 "percentage": 50, 00:11:12.299 "status": "finished", 00:11:12.299 "queue_depth": 1, 00:11:12.299 "io_size": 131072, 00:11:12.299 "runtime": 1.402353, 00:11:12.299 "iops": 12593.833364352628, 00:11:12.299 "mibps": 1574.2291705440784, 00:11:12.299 "io_failed": 1, 00:11:12.299 "io_timeout": 0, 00:11:12.299 "avg_latency_us": 111.95490439346506, 00:11:12.299 "min_latency_us": 26.494323144104804, 00:11:12.299 "max_latency_us": 1602.6270742358079 00:11:12.299 } 00:11:12.299 ], 00:11:12.299 "core_count": 1 00:11:12.299 } 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71507 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71507 ']' 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71507 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.299 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71507 00:11:12.558 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.558 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:12.558 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71507' 00:11:12.558 killing process with pid 71507 00:11:12.558 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71507 00:11:12.558 [2024-11-15 09:30:00.772588] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.558 09:30:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71507 00:11:12.817 [2024-11-15 09:30:01.147335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Bf5gMMHVj7 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.262 ************************************ 00:11:14.262 END TEST raid_write_error_test 00:11:14.262 ************************************ 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:14.262 00:11:14.262 real 0m5.023s 00:11:14.262 user 0m5.809s 00:11:14.262 sys 0m0.762s 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.262 09:30:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.262 09:30:02 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:14.262 09:30:02 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:14.262 09:30:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:14.262 09:30:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.262 09:30:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.262 ************************************ 00:11:14.262 START TEST raid_state_function_test 00:11:14.262 ************************************ 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.262 09:30:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:14.262 09:30:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71651 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71651' 00:11:14.262 Process raid pid: 71651 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71651 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71651 ']' 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.262 09:30:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.262 [2024-11-15 09:30:02.637636] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:14.262 [2024-11-15 09:30:02.637913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.521 [2024-11-15 09:30:02.805233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.521 [2024-11-15 09:30:02.947747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.780 [2024-11-15 09:30:03.184978] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.780 [2024-11-15 09:30:03.185121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.039 [2024-11-15 09:30:03.482197] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.039 [2024-11-15 09:30:03.482266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.039 [2024-11-15 09:30:03.482279] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.039 [2024-11-15 09:30:03.482290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.039 [2024-11-15 09:30:03.482305] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:15.039 [2024-11-15 09:30:03.482316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.039 [2024-11-15 09:30:03.482324] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.039 [2024-11-15 09:30:03.482334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.039 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.298 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.298 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.298 "name": "Existed_Raid", 00:11:15.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.298 "strip_size_kb": 64, 00:11:15.298 "state": "configuring", 00:11:15.298 "raid_level": "concat", 00:11:15.298 "superblock": false, 00:11:15.298 "num_base_bdevs": 4, 00:11:15.298 "num_base_bdevs_discovered": 0, 00:11:15.298 "num_base_bdevs_operational": 4, 00:11:15.298 "base_bdevs_list": [ 00:11:15.298 { 00:11:15.298 "name": "BaseBdev1", 00:11:15.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.298 "is_configured": false, 00:11:15.298 "data_offset": 0, 00:11:15.298 "data_size": 0 00:11:15.298 }, 00:11:15.298 { 00:11:15.298 "name": "BaseBdev2", 00:11:15.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.298 "is_configured": false, 00:11:15.298 "data_offset": 0, 00:11:15.298 "data_size": 0 00:11:15.298 }, 00:11:15.298 { 00:11:15.298 "name": "BaseBdev3", 00:11:15.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.298 "is_configured": false, 00:11:15.298 "data_offset": 0, 00:11:15.298 "data_size": 0 00:11:15.298 }, 00:11:15.298 { 00:11:15.298 "name": "BaseBdev4", 00:11:15.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.298 "is_configured": false, 00:11:15.298 "data_offset": 0, 00:11:15.298 "data_size": 0 00:11:15.298 } 00:11:15.298 ] 00:11:15.298 }' 00:11:15.298 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.298 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.560 [2024-11-15 09:30:03.937373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:15.560 [2024-11-15 09:30:03.937483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.560 [2024-11-15 09:30:03.949394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.560 [2024-11-15 09:30:03.949528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.560 [2024-11-15 09:30:03.949580] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:15.560 [2024-11-15 09:30:03.949605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:15.560 [2024-11-15 09:30:03.949630] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:15.560 [2024-11-15 09:30:03.949673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:15.560 [2024-11-15 09:30:03.949704] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:15.560 [2024-11-15 09:30:03.949734] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.560 09:30:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.560 [2024-11-15 09:30:04.003952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.560 BaseBdev1 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.560 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.819 [ 00:11:15.820 { 00:11:15.820 "name": "BaseBdev1", 00:11:15.820 "aliases": [ 00:11:15.820 "14f15791-586d-43cc-b5fb-91d21052f73d" 00:11:15.820 ], 00:11:15.820 "product_name": "Malloc disk", 00:11:15.820 "block_size": 512, 00:11:15.820 "num_blocks": 65536, 00:11:15.820 "uuid": "14f15791-586d-43cc-b5fb-91d21052f73d", 00:11:15.820 "assigned_rate_limits": { 00:11:15.820 "rw_ios_per_sec": 0, 00:11:15.820 "rw_mbytes_per_sec": 0, 00:11:15.820 "r_mbytes_per_sec": 0, 00:11:15.820 "w_mbytes_per_sec": 0 00:11:15.820 }, 00:11:15.820 "claimed": true, 00:11:15.820 "claim_type": "exclusive_write", 00:11:15.820 "zoned": false, 00:11:15.820 "supported_io_types": { 00:11:15.820 "read": true, 00:11:15.820 "write": true, 00:11:15.820 "unmap": true, 00:11:15.820 "flush": true, 00:11:15.820 "reset": true, 00:11:15.820 "nvme_admin": false, 00:11:15.820 "nvme_io": false, 00:11:15.820 "nvme_io_md": false, 00:11:15.820 "write_zeroes": true, 00:11:15.820 "zcopy": true, 00:11:15.820 "get_zone_info": false, 00:11:15.820 "zone_management": false, 00:11:15.820 "zone_append": false, 00:11:15.820 "compare": false, 00:11:15.820 "compare_and_write": false, 00:11:15.820 "abort": true, 00:11:15.820 "seek_hole": false, 00:11:15.820 "seek_data": false, 00:11:15.820 "copy": true, 00:11:15.820 "nvme_iov_md": false 00:11:15.820 }, 00:11:15.820 "memory_domains": [ 00:11:15.820 { 00:11:15.820 "dma_device_id": "system", 00:11:15.820 "dma_device_type": 1 00:11:15.820 }, 00:11:15.820 { 00:11:15.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.820 "dma_device_type": 2 00:11:15.820 } 00:11:15.820 ], 00:11:15.820 "driver_specific": {} 00:11:15.820 } 00:11:15.820 ] 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.820 "name": "Existed_Raid", 
00:11:15.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.820 "strip_size_kb": 64, 00:11:15.820 "state": "configuring", 00:11:15.820 "raid_level": "concat", 00:11:15.820 "superblock": false, 00:11:15.820 "num_base_bdevs": 4, 00:11:15.820 "num_base_bdevs_discovered": 1, 00:11:15.820 "num_base_bdevs_operational": 4, 00:11:15.820 "base_bdevs_list": [ 00:11:15.820 { 00:11:15.820 "name": "BaseBdev1", 00:11:15.820 "uuid": "14f15791-586d-43cc-b5fb-91d21052f73d", 00:11:15.820 "is_configured": true, 00:11:15.820 "data_offset": 0, 00:11:15.820 "data_size": 65536 00:11:15.820 }, 00:11:15.820 { 00:11:15.820 "name": "BaseBdev2", 00:11:15.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.820 "is_configured": false, 00:11:15.820 "data_offset": 0, 00:11:15.820 "data_size": 0 00:11:15.820 }, 00:11:15.820 { 00:11:15.820 "name": "BaseBdev3", 00:11:15.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.820 "is_configured": false, 00:11:15.820 "data_offset": 0, 00:11:15.820 "data_size": 0 00:11:15.820 }, 00:11:15.820 { 00:11:15.820 "name": "BaseBdev4", 00:11:15.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.820 "is_configured": false, 00:11:15.820 "data_offset": 0, 00:11:15.820 "data_size": 0 00:11:15.820 } 00:11:15.820 ] 00:11:15.820 }' 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.820 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 [2024-11-15 09:30:04.507165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.079 [2024-11-15 09:30:04.507315] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.079 [2024-11-15 09:30:04.519174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.079 [2024-11-15 09:30:04.521354] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.079 [2024-11-15 09:30:04.521400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.079 [2024-11-15 09:30:04.521412] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.079 [2024-11-15 09:30:04.521422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.079 [2024-11-15 09:30:04.521429] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:16.079 [2024-11-15 09:30:04.521437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.079 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.358 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.358 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.358 "name": "Existed_Raid", 00:11:16.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.358 "strip_size_kb": 64, 00:11:16.358 "state": "configuring", 00:11:16.358 "raid_level": "concat", 00:11:16.358 "superblock": false, 00:11:16.358 "num_base_bdevs": 4, 00:11:16.358 
"num_base_bdevs_discovered": 1, 00:11:16.358 "num_base_bdevs_operational": 4, 00:11:16.358 "base_bdevs_list": [ 00:11:16.358 { 00:11:16.358 "name": "BaseBdev1", 00:11:16.358 "uuid": "14f15791-586d-43cc-b5fb-91d21052f73d", 00:11:16.358 "is_configured": true, 00:11:16.358 "data_offset": 0, 00:11:16.358 "data_size": 65536 00:11:16.358 }, 00:11:16.358 { 00:11:16.358 "name": "BaseBdev2", 00:11:16.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.358 "is_configured": false, 00:11:16.358 "data_offset": 0, 00:11:16.358 "data_size": 0 00:11:16.358 }, 00:11:16.358 { 00:11:16.358 "name": "BaseBdev3", 00:11:16.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.358 "is_configured": false, 00:11:16.358 "data_offset": 0, 00:11:16.358 "data_size": 0 00:11:16.358 }, 00:11:16.358 { 00:11:16.358 "name": "BaseBdev4", 00:11:16.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.358 "is_configured": false, 00:11:16.358 "data_offset": 0, 00:11:16.358 "data_size": 0 00:11:16.358 } 00:11:16.358 ] 00:11:16.358 }' 00:11:16.358 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.358 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.636 09:30:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:16.636 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.636 09:30:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.636 [2024-11-15 09:30:05.015427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:16.636 BaseBdev2 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:16.636 09:30:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.636 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.636 [ 00:11:16.636 { 00:11:16.636 "name": "BaseBdev2", 00:11:16.636 "aliases": [ 00:11:16.636 "c6d203b4-89f6-4856-afd8-fa3b5ed0cc7b" 00:11:16.636 ], 00:11:16.636 "product_name": "Malloc disk", 00:11:16.636 "block_size": 512, 00:11:16.636 "num_blocks": 65536, 00:11:16.636 "uuid": "c6d203b4-89f6-4856-afd8-fa3b5ed0cc7b", 00:11:16.636 "assigned_rate_limits": { 00:11:16.636 "rw_ios_per_sec": 0, 00:11:16.636 "rw_mbytes_per_sec": 0, 00:11:16.636 "r_mbytes_per_sec": 0, 00:11:16.636 "w_mbytes_per_sec": 0 00:11:16.636 }, 00:11:16.636 "claimed": true, 00:11:16.636 "claim_type": "exclusive_write", 00:11:16.636 "zoned": false, 00:11:16.636 "supported_io_types": { 
00:11:16.636 "read": true, 00:11:16.636 "write": true, 00:11:16.636 "unmap": true, 00:11:16.636 "flush": true, 00:11:16.636 "reset": true, 00:11:16.636 "nvme_admin": false, 00:11:16.636 "nvme_io": false, 00:11:16.636 "nvme_io_md": false, 00:11:16.637 "write_zeroes": true, 00:11:16.637 "zcopy": true, 00:11:16.637 "get_zone_info": false, 00:11:16.637 "zone_management": false, 00:11:16.637 "zone_append": false, 00:11:16.637 "compare": false, 00:11:16.637 "compare_and_write": false, 00:11:16.637 "abort": true, 00:11:16.637 "seek_hole": false, 00:11:16.637 "seek_data": false, 00:11:16.637 "copy": true, 00:11:16.637 "nvme_iov_md": false 00:11:16.637 }, 00:11:16.637 "memory_domains": [ 00:11:16.637 { 00:11:16.637 "dma_device_id": "system", 00:11:16.637 "dma_device_type": 1 00:11:16.637 }, 00:11:16.637 { 00:11:16.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.637 "dma_device_type": 2 00:11:16.637 } 00:11:16.637 ], 00:11:16.637 "driver_specific": {} 00:11:16.637 } 00:11:16.637 ] 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.637 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.895 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.895 "name": "Existed_Raid", 00:11:16.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.895 "strip_size_kb": 64, 00:11:16.895 "state": "configuring", 00:11:16.895 "raid_level": "concat", 00:11:16.895 "superblock": false, 00:11:16.895 "num_base_bdevs": 4, 00:11:16.895 "num_base_bdevs_discovered": 2, 00:11:16.895 "num_base_bdevs_operational": 4, 00:11:16.895 "base_bdevs_list": [ 00:11:16.895 { 00:11:16.895 "name": "BaseBdev1", 00:11:16.895 "uuid": "14f15791-586d-43cc-b5fb-91d21052f73d", 00:11:16.895 "is_configured": true, 00:11:16.895 "data_offset": 0, 00:11:16.895 "data_size": 65536 00:11:16.895 }, 00:11:16.895 { 00:11:16.895 "name": "BaseBdev2", 00:11:16.895 "uuid": "c6d203b4-89f6-4856-afd8-fa3b5ed0cc7b", 00:11:16.895 
"is_configured": true, 00:11:16.895 "data_offset": 0, 00:11:16.895 "data_size": 65536 00:11:16.895 }, 00:11:16.895 { 00:11:16.895 "name": "BaseBdev3", 00:11:16.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.895 "is_configured": false, 00:11:16.895 "data_offset": 0, 00:11:16.895 "data_size": 0 00:11:16.895 }, 00:11:16.895 { 00:11:16.895 "name": "BaseBdev4", 00:11:16.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.895 "is_configured": false, 00:11:16.895 "data_offset": 0, 00:11:16.895 "data_size": 0 00:11:16.895 } 00:11:16.895 ] 00:11:16.895 }' 00:11:16.895 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.895 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.155 [2024-11-15 09:30:05.551338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.155 BaseBdev3 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.155 [ 00:11:17.155 { 00:11:17.155 "name": "BaseBdev3", 00:11:17.155 "aliases": [ 00:11:17.155 "39d83bca-0caf-44e9-aa56-52f60398a45a" 00:11:17.155 ], 00:11:17.155 "product_name": "Malloc disk", 00:11:17.155 "block_size": 512, 00:11:17.155 "num_blocks": 65536, 00:11:17.155 "uuid": "39d83bca-0caf-44e9-aa56-52f60398a45a", 00:11:17.155 "assigned_rate_limits": { 00:11:17.155 "rw_ios_per_sec": 0, 00:11:17.155 "rw_mbytes_per_sec": 0, 00:11:17.155 "r_mbytes_per_sec": 0, 00:11:17.155 "w_mbytes_per_sec": 0 00:11:17.155 }, 00:11:17.155 "claimed": true, 00:11:17.155 "claim_type": "exclusive_write", 00:11:17.155 "zoned": false, 00:11:17.155 "supported_io_types": { 00:11:17.155 "read": true, 00:11:17.155 "write": true, 00:11:17.155 "unmap": true, 00:11:17.155 "flush": true, 00:11:17.155 "reset": true, 00:11:17.155 "nvme_admin": false, 00:11:17.155 "nvme_io": false, 00:11:17.155 "nvme_io_md": false, 00:11:17.155 "write_zeroes": true, 00:11:17.155 "zcopy": true, 00:11:17.155 "get_zone_info": false, 00:11:17.155 "zone_management": false, 00:11:17.155 "zone_append": false, 00:11:17.155 "compare": false, 00:11:17.155 "compare_and_write": false, 
00:11:17.155 "abort": true, 00:11:17.155 "seek_hole": false, 00:11:17.155 "seek_data": false, 00:11:17.155 "copy": true, 00:11:17.155 "nvme_iov_md": false 00:11:17.155 }, 00:11:17.155 "memory_domains": [ 00:11:17.155 { 00:11:17.155 "dma_device_id": "system", 00:11:17.155 "dma_device_type": 1 00:11:17.155 }, 00:11:17.155 { 00:11:17.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.155 "dma_device_type": 2 00:11:17.155 } 00:11:17.155 ], 00:11:17.155 "driver_specific": {} 00:11:17.155 } 00:11:17.155 ] 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.155 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.156 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.414 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.414 "name": "Existed_Raid", 00:11:17.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.414 "strip_size_kb": 64, 00:11:17.414 "state": "configuring", 00:11:17.414 "raid_level": "concat", 00:11:17.414 "superblock": false, 00:11:17.414 "num_base_bdevs": 4, 00:11:17.414 "num_base_bdevs_discovered": 3, 00:11:17.414 "num_base_bdevs_operational": 4, 00:11:17.414 "base_bdevs_list": [ 00:11:17.414 { 00:11:17.414 "name": "BaseBdev1", 00:11:17.414 "uuid": "14f15791-586d-43cc-b5fb-91d21052f73d", 00:11:17.414 "is_configured": true, 00:11:17.414 "data_offset": 0, 00:11:17.414 "data_size": 65536 00:11:17.414 }, 00:11:17.414 { 00:11:17.414 "name": "BaseBdev2", 00:11:17.414 "uuid": "c6d203b4-89f6-4856-afd8-fa3b5ed0cc7b", 00:11:17.414 "is_configured": true, 00:11:17.414 "data_offset": 0, 00:11:17.414 "data_size": 65536 00:11:17.414 }, 00:11:17.414 { 00:11:17.414 "name": "BaseBdev3", 00:11:17.414 "uuid": "39d83bca-0caf-44e9-aa56-52f60398a45a", 00:11:17.414 "is_configured": true, 00:11:17.414 "data_offset": 0, 00:11:17.414 "data_size": 65536 00:11:17.414 }, 00:11:17.414 { 00:11:17.414 "name": "BaseBdev4", 00:11:17.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.414 "is_configured": false, 
00:11:17.414 "data_offset": 0, 00:11:17.414 "data_size": 0 00:11:17.414 } 00:11:17.414 ] 00:11:17.414 }' 00:11:17.414 09:30:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.414 09:30:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.674 [2024-11-15 09:30:06.087182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.674 [2024-11-15 09:30:06.087245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:17.674 [2024-11-15 09:30:06.087255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:17.674 [2024-11-15 09:30:06.087570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:17.674 [2024-11-15 09:30:06.087765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:17.674 [2024-11-15 09:30:06.087780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:17.674 [2024-11-15 09:30:06.088168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.674 BaseBdev4 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.674 [ 00:11:17.674 { 00:11:17.674 "name": "BaseBdev4", 00:11:17.674 "aliases": [ 00:11:17.674 "95fa7e53-cf42-4611-ba82-a20c6736aee7" 00:11:17.674 ], 00:11:17.674 "product_name": "Malloc disk", 00:11:17.674 "block_size": 512, 00:11:17.674 "num_blocks": 65536, 00:11:17.674 "uuid": "95fa7e53-cf42-4611-ba82-a20c6736aee7", 00:11:17.674 "assigned_rate_limits": { 00:11:17.674 "rw_ios_per_sec": 0, 00:11:17.674 "rw_mbytes_per_sec": 0, 00:11:17.674 "r_mbytes_per_sec": 0, 00:11:17.674 "w_mbytes_per_sec": 0 00:11:17.674 }, 00:11:17.674 "claimed": true, 00:11:17.674 "claim_type": "exclusive_write", 00:11:17.674 "zoned": false, 00:11:17.674 "supported_io_types": { 00:11:17.674 "read": true, 00:11:17.674 "write": true, 00:11:17.674 "unmap": true, 00:11:17.674 "flush": true, 00:11:17.674 "reset": true, 00:11:17.674 
"nvme_admin": false, 00:11:17.674 "nvme_io": false, 00:11:17.674 "nvme_io_md": false, 00:11:17.674 "write_zeroes": true, 00:11:17.674 "zcopy": true, 00:11:17.674 "get_zone_info": false, 00:11:17.674 "zone_management": false, 00:11:17.674 "zone_append": false, 00:11:17.674 "compare": false, 00:11:17.674 "compare_and_write": false, 00:11:17.674 "abort": true, 00:11:17.674 "seek_hole": false, 00:11:17.674 "seek_data": false, 00:11:17.674 "copy": true, 00:11:17.674 "nvme_iov_md": false 00:11:17.674 }, 00:11:17.674 "memory_domains": [ 00:11:17.674 { 00:11:17.674 "dma_device_id": "system", 00:11:17.674 "dma_device_type": 1 00:11:17.674 }, 00:11:17.674 { 00:11:17.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.674 "dma_device_type": 2 00:11:17.674 } 00:11:17.674 ], 00:11:17.674 "driver_specific": {} 00:11:17.674 } 00:11:17.674 ] 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.674 
09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.674 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.934 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.934 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.934 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.934 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.934 "name": "Existed_Raid", 00:11:17.934 "uuid": "f16df5ff-834b-455c-a6db-f794d2549c8e", 00:11:17.934 "strip_size_kb": 64, 00:11:17.934 "state": "online", 00:11:17.934 "raid_level": "concat", 00:11:17.934 "superblock": false, 00:11:17.934 "num_base_bdevs": 4, 00:11:17.934 "num_base_bdevs_discovered": 4, 00:11:17.934 "num_base_bdevs_operational": 4, 00:11:17.934 "base_bdevs_list": [ 00:11:17.934 { 00:11:17.934 "name": "BaseBdev1", 00:11:17.934 "uuid": "14f15791-586d-43cc-b5fb-91d21052f73d", 00:11:17.934 "is_configured": true, 00:11:17.934 "data_offset": 0, 00:11:17.934 "data_size": 65536 00:11:17.934 }, 00:11:17.934 { 00:11:17.934 "name": "BaseBdev2", 00:11:17.934 "uuid": "c6d203b4-89f6-4856-afd8-fa3b5ed0cc7b", 00:11:17.934 "is_configured": true, 00:11:17.934 "data_offset": 0, 00:11:17.934 "data_size": 65536 00:11:17.934 }, 00:11:17.934 { 00:11:17.934 "name": "BaseBdev3", 
00:11:17.934 "uuid": "39d83bca-0caf-44e9-aa56-52f60398a45a", 00:11:17.934 "is_configured": true, 00:11:17.934 "data_offset": 0, 00:11:17.934 "data_size": 65536 00:11:17.934 }, 00:11:17.934 { 00:11:17.934 "name": "BaseBdev4", 00:11:17.934 "uuid": "95fa7e53-cf42-4611-ba82-a20c6736aee7", 00:11:17.934 "is_configured": true, 00:11:17.934 "data_offset": 0, 00:11:17.934 "data_size": 65536 00:11:17.934 } 00:11:17.934 ] 00:11:17.934 }' 00:11:17.934 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.934 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.193 [2024-11-15 09:30:06.570838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.193 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.193 
09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.193 "name": "Existed_Raid", 00:11:18.193 "aliases": [ 00:11:18.193 "f16df5ff-834b-455c-a6db-f794d2549c8e" 00:11:18.193 ], 00:11:18.193 "product_name": "Raid Volume", 00:11:18.193 "block_size": 512, 00:11:18.193 "num_blocks": 262144, 00:11:18.193 "uuid": "f16df5ff-834b-455c-a6db-f794d2549c8e", 00:11:18.193 "assigned_rate_limits": { 00:11:18.193 "rw_ios_per_sec": 0, 00:11:18.193 "rw_mbytes_per_sec": 0, 00:11:18.193 "r_mbytes_per_sec": 0, 00:11:18.193 "w_mbytes_per_sec": 0 00:11:18.193 }, 00:11:18.193 "claimed": false, 00:11:18.193 "zoned": false, 00:11:18.193 "supported_io_types": { 00:11:18.193 "read": true, 00:11:18.193 "write": true, 00:11:18.193 "unmap": true, 00:11:18.193 "flush": true, 00:11:18.193 "reset": true, 00:11:18.193 "nvme_admin": false, 00:11:18.193 "nvme_io": false, 00:11:18.193 "nvme_io_md": false, 00:11:18.193 "write_zeroes": true, 00:11:18.193 "zcopy": false, 00:11:18.193 "get_zone_info": false, 00:11:18.193 "zone_management": false, 00:11:18.193 "zone_append": false, 00:11:18.193 "compare": false, 00:11:18.193 "compare_and_write": false, 00:11:18.194 "abort": false, 00:11:18.194 "seek_hole": false, 00:11:18.194 "seek_data": false, 00:11:18.194 "copy": false, 00:11:18.194 "nvme_iov_md": false 00:11:18.194 }, 00:11:18.194 "memory_domains": [ 00:11:18.194 { 00:11:18.194 "dma_device_id": "system", 00:11:18.194 "dma_device_type": 1 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.194 "dma_device_type": 2 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "dma_device_id": "system", 00:11:18.194 "dma_device_type": 1 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.194 "dma_device_type": 2 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "dma_device_id": "system", 00:11:18.194 "dma_device_type": 1 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:18.194 "dma_device_type": 2 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "dma_device_id": "system", 00:11:18.194 "dma_device_type": 1 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.194 "dma_device_type": 2 00:11:18.194 } 00:11:18.194 ], 00:11:18.194 "driver_specific": { 00:11:18.194 "raid": { 00:11:18.194 "uuid": "f16df5ff-834b-455c-a6db-f794d2549c8e", 00:11:18.194 "strip_size_kb": 64, 00:11:18.194 "state": "online", 00:11:18.194 "raid_level": "concat", 00:11:18.194 "superblock": false, 00:11:18.194 "num_base_bdevs": 4, 00:11:18.194 "num_base_bdevs_discovered": 4, 00:11:18.194 "num_base_bdevs_operational": 4, 00:11:18.194 "base_bdevs_list": [ 00:11:18.194 { 00:11:18.194 "name": "BaseBdev1", 00:11:18.194 "uuid": "14f15791-586d-43cc-b5fb-91d21052f73d", 00:11:18.194 "is_configured": true, 00:11:18.194 "data_offset": 0, 00:11:18.194 "data_size": 65536 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "name": "BaseBdev2", 00:11:18.194 "uuid": "c6d203b4-89f6-4856-afd8-fa3b5ed0cc7b", 00:11:18.194 "is_configured": true, 00:11:18.194 "data_offset": 0, 00:11:18.194 "data_size": 65536 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "name": "BaseBdev3", 00:11:18.194 "uuid": "39d83bca-0caf-44e9-aa56-52f60398a45a", 00:11:18.194 "is_configured": true, 00:11:18.194 "data_offset": 0, 00:11:18.194 "data_size": 65536 00:11:18.194 }, 00:11:18.194 { 00:11:18.194 "name": "BaseBdev4", 00:11:18.194 "uuid": "95fa7e53-cf42-4611-ba82-a20c6736aee7", 00:11:18.194 "is_configured": true, 00:11:18.194 "data_offset": 0, 00:11:18.194 "data_size": 65536 00:11:18.194 } 00:11:18.194 ] 00:11:18.194 } 00:11:18.194 } 00:11:18.194 }' 00:11:18.194 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.194 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:18.194 BaseBdev2 
00:11:18.194 BaseBdev3 00:11:18.194 BaseBdev4' 00:11:18.194 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.453 09:30:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.453 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.454 09:30:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.454 09:30:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.454 [2024-11-15 09:30:06.914001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:18.454 [2024-11-15 09:30:06.914106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.454 [2024-11-15 09:30:06.914216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.713 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.713 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.714 "name": "Existed_Raid", 00:11:18.714 "uuid": "f16df5ff-834b-455c-a6db-f794d2549c8e", 00:11:18.714 "strip_size_kb": 64, 00:11:18.714 "state": "offline", 00:11:18.714 "raid_level": "concat", 00:11:18.714 "superblock": false, 00:11:18.714 "num_base_bdevs": 4, 00:11:18.714 "num_base_bdevs_discovered": 3, 00:11:18.714 "num_base_bdevs_operational": 3, 00:11:18.714 "base_bdevs_list": [ 00:11:18.714 { 00:11:18.714 "name": null, 00:11:18.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.714 "is_configured": false, 00:11:18.714 "data_offset": 0, 00:11:18.714 "data_size": 65536 00:11:18.714 }, 00:11:18.714 { 00:11:18.714 "name": "BaseBdev2", 00:11:18.714 "uuid": "c6d203b4-89f6-4856-afd8-fa3b5ed0cc7b", 00:11:18.714 "is_configured": 
true, 00:11:18.714 "data_offset": 0, 00:11:18.714 "data_size": 65536 00:11:18.714 }, 00:11:18.714 { 00:11:18.714 "name": "BaseBdev3", 00:11:18.714 "uuid": "39d83bca-0caf-44e9-aa56-52f60398a45a", 00:11:18.714 "is_configured": true, 00:11:18.714 "data_offset": 0, 00:11:18.714 "data_size": 65536 00:11:18.714 }, 00:11:18.714 { 00:11:18.714 "name": "BaseBdev4", 00:11:18.714 "uuid": "95fa7e53-cf42-4611-ba82-a20c6736aee7", 00:11:18.714 "is_configured": true, 00:11:18.714 "data_offset": 0, 00:11:18.714 "data_size": 65536 00:11:18.714 } 00:11:18.714 ] 00:11:18.714 }' 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.714 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.002 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:19.002 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.002 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.002 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.002 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.002 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.286 [2024-11-15 09:30:07.522289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.286 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.286 [2024-11-15 09:30:07.691304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.544 09:30:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.544 [2024-11-15 09:30:07.839418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:19.544 [2024-11-15 09:30:07.839563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.544 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.545 09:30:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 BaseBdev2 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 [ 00:11:19.802 { 00:11:19.802 "name": "BaseBdev2", 00:11:19.802 "aliases": [ 00:11:19.802 "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5" 00:11:19.802 ], 00:11:19.802 "product_name": "Malloc disk", 00:11:19.802 "block_size": 512, 00:11:19.802 "num_blocks": 65536, 00:11:19.802 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:19.802 "assigned_rate_limits": { 00:11:19.802 "rw_ios_per_sec": 0, 00:11:19.802 "rw_mbytes_per_sec": 0, 00:11:19.802 "r_mbytes_per_sec": 0, 00:11:19.802 "w_mbytes_per_sec": 0 00:11:19.802 }, 00:11:19.802 "claimed": false, 00:11:19.802 "zoned": false, 00:11:19.802 "supported_io_types": { 00:11:19.802 "read": true, 00:11:19.802 "write": true, 00:11:19.802 "unmap": true, 00:11:19.802 "flush": true, 00:11:19.802 "reset": true, 00:11:19.802 "nvme_admin": false, 00:11:19.802 "nvme_io": false, 00:11:19.802 "nvme_io_md": false, 00:11:19.802 "write_zeroes": true, 00:11:19.802 "zcopy": true, 00:11:19.802 "get_zone_info": false, 00:11:19.802 "zone_management": false, 00:11:19.802 "zone_append": false, 00:11:19.802 "compare": false, 00:11:19.802 "compare_and_write": false, 00:11:19.802 "abort": true, 00:11:19.802 "seek_hole": false, 00:11:19.802 
"seek_data": false, 00:11:19.802 "copy": true, 00:11:19.802 "nvme_iov_md": false 00:11:19.802 }, 00:11:19.802 "memory_domains": [ 00:11:19.802 { 00:11:19.802 "dma_device_id": "system", 00:11:19.802 "dma_device_type": 1 00:11:19.802 }, 00:11:19.802 { 00:11:19.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.802 "dma_device_type": 2 00:11:19.802 } 00:11:19.802 ], 00:11:19.802 "driver_specific": {} 00:11:19.802 } 00:11:19.802 ] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 BaseBdev3 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 [ 00:11:19.802 { 00:11:19.802 "name": "BaseBdev3", 00:11:19.802 "aliases": [ 00:11:19.802 "3f32d2f4-2fdf-4475-9698-668ea8a57c79" 00:11:19.802 ], 00:11:19.802 "product_name": "Malloc disk", 00:11:19.802 "block_size": 512, 00:11:19.802 "num_blocks": 65536, 00:11:19.802 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:19.802 "assigned_rate_limits": { 00:11:19.802 "rw_ios_per_sec": 0, 00:11:19.802 "rw_mbytes_per_sec": 0, 00:11:19.802 "r_mbytes_per_sec": 0, 00:11:19.802 "w_mbytes_per_sec": 0 00:11:19.802 }, 00:11:19.802 "claimed": false, 00:11:19.802 "zoned": false, 00:11:19.802 "supported_io_types": { 00:11:19.802 "read": true, 00:11:19.802 "write": true, 00:11:19.802 "unmap": true, 00:11:19.802 "flush": true, 00:11:19.802 "reset": true, 00:11:19.802 "nvme_admin": false, 00:11:19.802 "nvme_io": false, 00:11:19.802 "nvme_io_md": false, 00:11:19.802 "write_zeroes": true, 00:11:19.802 "zcopy": true, 00:11:19.802 "get_zone_info": false, 00:11:19.802 "zone_management": false, 00:11:19.802 "zone_append": false, 00:11:19.802 "compare": false, 00:11:19.802 "compare_and_write": false, 00:11:19.802 "abort": true, 00:11:19.802 "seek_hole": false, 00:11:19.802 "seek_data": false, 
00:11:19.802 "copy": true, 00:11:19.802 "nvme_iov_md": false 00:11:19.802 }, 00:11:19.802 "memory_domains": [ 00:11:19.802 { 00:11:19.802 "dma_device_id": "system", 00:11:19.802 "dma_device_type": 1 00:11:19.802 }, 00:11:19.802 { 00:11:19.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.802 "dma_device_type": 2 00:11:19.802 } 00:11:19.802 ], 00:11:19.802 "driver_specific": {} 00:11:19.802 } 00:11:19.802 ] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 BaseBdev4 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:19.802 
09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.802 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.802 [ 00:11:19.802 { 00:11:19.802 "name": "BaseBdev4", 00:11:19.802 "aliases": [ 00:11:19.802 "a0bb272e-118e-4adc-a331-556d18f68c34" 00:11:19.802 ], 00:11:19.802 "product_name": "Malloc disk", 00:11:19.802 "block_size": 512, 00:11:19.802 "num_blocks": 65536, 00:11:19.802 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:19.802 "assigned_rate_limits": { 00:11:19.802 "rw_ios_per_sec": 0, 00:11:19.802 "rw_mbytes_per_sec": 0, 00:11:19.803 "r_mbytes_per_sec": 0, 00:11:19.803 "w_mbytes_per_sec": 0 00:11:19.803 }, 00:11:19.803 "claimed": false, 00:11:19.803 "zoned": false, 00:11:19.803 "supported_io_types": { 00:11:19.803 "read": true, 00:11:19.803 "write": true, 00:11:19.803 "unmap": true, 00:11:19.803 "flush": true, 00:11:19.803 "reset": true, 00:11:19.803 "nvme_admin": false, 00:11:19.803 "nvme_io": false, 00:11:19.803 "nvme_io_md": false, 00:11:19.803 "write_zeroes": true, 00:11:19.803 "zcopy": true, 00:11:19.803 "get_zone_info": false, 00:11:19.803 "zone_management": false, 00:11:19.803 "zone_append": false, 00:11:19.803 "compare": false, 00:11:19.803 "compare_and_write": false, 00:11:19.803 "abort": true, 00:11:19.803 "seek_hole": false, 00:11:20.060 "seek_data": false, 00:11:20.060 
"copy": true, 00:11:20.060 "nvme_iov_md": false 00:11:20.060 }, 00:11:20.060 "memory_domains": [ 00:11:20.060 { 00:11:20.060 "dma_device_id": "system", 00:11:20.060 "dma_device_type": 1 00:11:20.060 }, 00:11:20.060 { 00:11:20.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.060 "dma_device_type": 2 00:11:20.060 } 00:11:20.061 ], 00:11:20.061 "driver_specific": {} 00:11:20.061 } 00:11:20.061 ] 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.061 [2024-11-15 09:30:08.277990] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.061 [2024-11-15 09:30:08.278131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.061 [2024-11-15 09:30:08.278208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.061 [2024-11-15 09:30:08.280488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.061 [2024-11-15 09:30:08.280601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.061 09:30:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.061 "name": "Existed_Raid", 00:11:20.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.061 "strip_size_kb": 64, 00:11:20.061 "state": "configuring", 00:11:20.061 
"raid_level": "concat", 00:11:20.061 "superblock": false, 00:11:20.061 "num_base_bdevs": 4, 00:11:20.061 "num_base_bdevs_discovered": 3, 00:11:20.061 "num_base_bdevs_operational": 4, 00:11:20.061 "base_bdevs_list": [ 00:11:20.061 { 00:11:20.061 "name": "BaseBdev1", 00:11:20.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.061 "is_configured": false, 00:11:20.061 "data_offset": 0, 00:11:20.061 "data_size": 0 00:11:20.061 }, 00:11:20.061 { 00:11:20.061 "name": "BaseBdev2", 00:11:20.061 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:20.061 "is_configured": true, 00:11:20.061 "data_offset": 0, 00:11:20.061 "data_size": 65536 00:11:20.061 }, 00:11:20.061 { 00:11:20.061 "name": "BaseBdev3", 00:11:20.061 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:20.061 "is_configured": true, 00:11:20.061 "data_offset": 0, 00:11:20.061 "data_size": 65536 00:11:20.061 }, 00:11:20.061 { 00:11:20.061 "name": "BaseBdev4", 00:11:20.061 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:20.061 "is_configured": true, 00:11:20.061 "data_offset": 0, 00:11:20.061 "data_size": 65536 00:11:20.061 } 00:11:20.061 ] 00:11:20.061 }' 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.061 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.320 [2024-11-15 09:30:08.745179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.320 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.579 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.579 "name": "Existed_Raid", 00:11:20.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.579 "strip_size_kb": 64, 00:11:20.579 "state": "configuring", 00:11:20.579 "raid_level": "concat", 00:11:20.579 "superblock": false, 
00:11:20.579 "num_base_bdevs": 4, 00:11:20.579 "num_base_bdevs_discovered": 2, 00:11:20.579 "num_base_bdevs_operational": 4, 00:11:20.579 "base_bdevs_list": [ 00:11:20.579 { 00:11:20.579 "name": "BaseBdev1", 00:11:20.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.579 "is_configured": false, 00:11:20.579 "data_offset": 0, 00:11:20.579 "data_size": 0 00:11:20.579 }, 00:11:20.579 { 00:11:20.579 "name": null, 00:11:20.579 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:20.579 "is_configured": false, 00:11:20.579 "data_offset": 0, 00:11:20.579 "data_size": 65536 00:11:20.579 }, 00:11:20.579 { 00:11:20.579 "name": "BaseBdev3", 00:11:20.579 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:20.579 "is_configured": true, 00:11:20.579 "data_offset": 0, 00:11:20.579 "data_size": 65536 00:11:20.579 }, 00:11:20.579 { 00:11:20.579 "name": "BaseBdev4", 00:11:20.579 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:20.579 "is_configured": true, 00:11:20.579 "data_offset": 0, 00:11:20.579 "data_size": 65536 00:11:20.579 } 00:11:20.579 ] 00:11:20.579 }' 00:11:20.579 09:30:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.579 09:30:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:20.839 09:30:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 [2024-11-15 09:30:09.204246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.839 BaseBdev1 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.839 [ 00:11:20.839 { 00:11:20.839 "name": "BaseBdev1", 00:11:20.839 "aliases": [ 00:11:20.839 "0603f60d-685d-46b3-b645-88142f898eb9" 00:11:20.839 ], 00:11:20.839 "product_name": "Malloc disk", 00:11:20.839 "block_size": 512, 00:11:20.839 "num_blocks": 65536, 00:11:20.839 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:20.839 "assigned_rate_limits": { 00:11:20.839 "rw_ios_per_sec": 0, 00:11:20.839 "rw_mbytes_per_sec": 0, 00:11:20.839 "r_mbytes_per_sec": 0, 00:11:20.839 "w_mbytes_per_sec": 0 00:11:20.839 }, 00:11:20.839 "claimed": true, 00:11:20.839 "claim_type": "exclusive_write", 00:11:20.839 "zoned": false, 00:11:20.839 "supported_io_types": { 00:11:20.839 "read": true, 00:11:20.839 "write": true, 00:11:20.839 "unmap": true, 00:11:20.839 "flush": true, 00:11:20.839 "reset": true, 00:11:20.839 "nvme_admin": false, 00:11:20.839 "nvme_io": false, 00:11:20.839 "nvme_io_md": false, 00:11:20.839 "write_zeroes": true, 00:11:20.839 "zcopy": true, 00:11:20.839 "get_zone_info": false, 00:11:20.839 "zone_management": false, 00:11:20.839 "zone_append": false, 00:11:20.839 "compare": false, 00:11:20.839 "compare_and_write": false, 00:11:20.839 "abort": true, 00:11:20.839 "seek_hole": false, 00:11:20.839 "seek_data": false, 00:11:20.839 "copy": true, 00:11:20.839 "nvme_iov_md": false 00:11:20.839 }, 00:11:20.839 "memory_domains": [ 00:11:20.839 { 00:11:20.839 "dma_device_id": "system", 00:11:20.839 "dma_device_type": 1 00:11:20.839 }, 00:11:20.839 { 00:11:20.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.839 "dma_device_type": 2 00:11:20.839 } 00:11:20.839 ], 00:11:20.839 "driver_specific": {} 00:11:20.839 } 00:11:20.839 ] 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.100 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.100 "name": "Existed_Raid", 00:11:21.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.100 "strip_size_kb": 64, 00:11:21.100 "state": "configuring", 00:11:21.100 "raid_level": "concat", 00:11:21.100 "superblock": false, 
00:11:21.100 "num_base_bdevs": 4, 00:11:21.100 "num_base_bdevs_discovered": 3, 00:11:21.100 "num_base_bdevs_operational": 4, 00:11:21.100 "base_bdevs_list": [ 00:11:21.100 { 00:11:21.100 "name": "BaseBdev1", 00:11:21.100 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:21.100 "is_configured": true, 00:11:21.100 "data_offset": 0, 00:11:21.100 "data_size": 65536 00:11:21.100 }, 00:11:21.100 { 00:11:21.100 "name": null, 00:11:21.100 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:21.100 "is_configured": false, 00:11:21.100 "data_offset": 0, 00:11:21.100 "data_size": 65536 00:11:21.100 }, 00:11:21.100 { 00:11:21.100 "name": "BaseBdev3", 00:11:21.100 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:21.100 "is_configured": true, 00:11:21.100 "data_offset": 0, 00:11:21.100 "data_size": 65536 00:11:21.100 }, 00:11:21.100 { 00:11:21.100 "name": "BaseBdev4", 00:11:21.100 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:21.100 "is_configured": true, 00:11:21.100 "data_offset": 0, 00:11:21.100 "data_size": 65536 00:11:21.100 } 00:11:21.100 ] 00:11:21.100 }' 00:11:21.100 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.100 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:21.360 09:30:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.360 [2024-11-15 09:30:09.723501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.360 09:30:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.360 "name": "Existed_Raid", 00:11:21.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.360 "strip_size_kb": 64, 00:11:21.360 "state": "configuring", 00:11:21.360 "raid_level": "concat", 00:11:21.360 "superblock": false, 00:11:21.360 "num_base_bdevs": 4, 00:11:21.360 "num_base_bdevs_discovered": 2, 00:11:21.360 "num_base_bdevs_operational": 4, 00:11:21.360 "base_bdevs_list": [ 00:11:21.360 { 00:11:21.360 "name": "BaseBdev1", 00:11:21.360 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:21.360 "is_configured": true, 00:11:21.360 "data_offset": 0, 00:11:21.360 "data_size": 65536 00:11:21.360 }, 00:11:21.360 { 00:11:21.360 "name": null, 00:11:21.360 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:21.360 "is_configured": false, 00:11:21.360 "data_offset": 0, 00:11:21.360 "data_size": 65536 00:11:21.360 }, 00:11:21.360 { 00:11:21.360 "name": null, 00:11:21.360 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:21.360 "is_configured": false, 00:11:21.360 "data_offset": 0, 00:11:21.360 "data_size": 65536 00:11:21.360 }, 00:11:21.360 { 00:11:21.360 "name": "BaseBdev4", 00:11:21.360 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:21.360 "is_configured": true, 00:11:21.360 "data_offset": 0, 00:11:21.360 "data_size": 65536 00:11:21.360 } 00:11:21.360 ] 00:11:21.360 }' 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.360 09:30:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.942 [2024-11-15 09:30:10.222646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.942 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.943 "name": "Existed_Raid", 00:11:21.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.943 "strip_size_kb": 64, 00:11:21.943 "state": "configuring", 00:11:21.943 "raid_level": "concat", 00:11:21.943 "superblock": false, 00:11:21.943 "num_base_bdevs": 4, 00:11:21.943 "num_base_bdevs_discovered": 3, 00:11:21.943 "num_base_bdevs_operational": 4, 00:11:21.943 "base_bdevs_list": [ 00:11:21.943 { 00:11:21.943 "name": "BaseBdev1", 00:11:21.943 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:21.943 "is_configured": true, 00:11:21.943 "data_offset": 0, 00:11:21.943 "data_size": 65536 00:11:21.943 }, 00:11:21.943 { 00:11:21.943 "name": null, 00:11:21.943 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:21.943 "is_configured": false, 00:11:21.943 "data_offset": 0, 00:11:21.943 "data_size": 65536 00:11:21.943 }, 00:11:21.943 { 00:11:21.943 "name": "BaseBdev3", 00:11:21.943 "uuid": 
"3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:21.943 "is_configured": true, 00:11:21.943 "data_offset": 0, 00:11:21.943 "data_size": 65536 00:11:21.943 }, 00:11:21.943 { 00:11:21.943 "name": "BaseBdev4", 00:11:21.943 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:21.943 "is_configured": true, 00:11:21.943 "data_offset": 0, 00:11:21.943 "data_size": 65536 00:11:21.943 } 00:11:21.943 ] 00:11:21.943 }' 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.943 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.229 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 [2024-11-15 09:30:10.689948] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.489 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.490 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.490 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.490 "name": "Existed_Raid", 00:11:22.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.490 "strip_size_kb": 64, 00:11:22.490 "state": "configuring", 00:11:22.490 "raid_level": "concat", 00:11:22.490 "superblock": false, 00:11:22.490 "num_base_bdevs": 4, 00:11:22.490 
"num_base_bdevs_discovered": 2, 00:11:22.490 "num_base_bdevs_operational": 4, 00:11:22.490 "base_bdevs_list": [ 00:11:22.490 { 00:11:22.490 "name": null, 00:11:22.490 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:22.490 "is_configured": false, 00:11:22.490 "data_offset": 0, 00:11:22.490 "data_size": 65536 00:11:22.490 }, 00:11:22.490 { 00:11:22.490 "name": null, 00:11:22.490 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:22.490 "is_configured": false, 00:11:22.490 "data_offset": 0, 00:11:22.490 "data_size": 65536 00:11:22.490 }, 00:11:22.490 { 00:11:22.490 "name": "BaseBdev3", 00:11:22.490 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:22.490 "is_configured": true, 00:11:22.490 "data_offset": 0, 00:11:22.490 "data_size": 65536 00:11:22.490 }, 00:11:22.490 { 00:11:22.490 "name": "BaseBdev4", 00:11:22.490 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:22.490 "is_configured": true, 00:11:22.490 "data_offset": 0, 00:11:22.490 "data_size": 65536 00:11:22.490 } 00:11:22.490 ] 00:11:22.490 }' 00:11:22.490 09:30:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.490 09:30:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.057 [2024-11-15 09:30:11.273928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.057 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.057 "name": "Existed_Raid", 00:11:23.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.057 "strip_size_kb": 64, 00:11:23.057 "state": "configuring", 00:11:23.057 "raid_level": "concat", 00:11:23.057 "superblock": false, 00:11:23.057 "num_base_bdevs": 4, 00:11:23.057 "num_base_bdevs_discovered": 3, 00:11:23.057 "num_base_bdevs_operational": 4, 00:11:23.057 "base_bdevs_list": [ 00:11:23.057 { 00:11:23.057 "name": null, 00:11:23.057 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:23.057 "is_configured": false, 00:11:23.058 "data_offset": 0, 00:11:23.058 "data_size": 65536 00:11:23.058 }, 00:11:23.058 { 00:11:23.058 "name": "BaseBdev2", 00:11:23.058 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:23.058 "is_configured": true, 00:11:23.058 "data_offset": 0, 00:11:23.058 "data_size": 65536 00:11:23.058 }, 00:11:23.058 { 00:11:23.058 "name": "BaseBdev3", 00:11:23.058 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:23.058 "is_configured": true, 00:11:23.058 "data_offset": 0, 00:11:23.058 "data_size": 65536 00:11:23.058 }, 00:11:23.058 { 00:11:23.058 "name": "BaseBdev4", 00:11:23.058 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:23.058 "is_configured": true, 00:11:23.058 "data_offset": 0, 00:11:23.058 "data_size": 65536 00:11:23.058 } 00:11:23.058 ] 00:11:23.058 }' 00:11:23.058 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.058 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0603f60d-685d-46b3-b645-88142f898eb9 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.317 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.576 [2024-11-15 09:30:11.799840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:23.576 [2024-11-15 09:30:11.799926] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:23.576 [2024-11-15 09:30:11.799934] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:23.576 [2024-11-15 09:30:11.800239] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:23.576 [2024-11-15 09:30:11.800411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:23.576 [2024-11-15 09:30:11.800424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:23.576 [2024-11-15 09:30:11.800709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.576 NewBaseBdev 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.576 09:30:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.576 [ 00:11:23.576 { 00:11:23.576 "name": "NewBaseBdev", 00:11:23.576 "aliases": [ 00:11:23.576 "0603f60d-685d-46b3-b645-88142f898eb9" 00:11:23.576 ], 00:11:23.576 "product_name": "Malloc disk", 00:11:23.576 "block_size": 512, 00:11:23.576 "num_blocks": 65536, 00:11:23.576 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:23.576 "assigned_rate_limits": { 00:11:23.576 "rw_ios_per_sec": 0, 00:11:23.576 "rw_mbytes_per_sec": 0, 00:11:23.576 "r_mbytes_per_sec": 0, 00:11:23.576 "w_mbytes_per_sec": 0 00:11:23.576 }, 00:11:23.576 "claimed": true, 00:11:23.576 "claim_type": "exclusive_write", 00:11:23.576 "zoned": false, 00:11:23.576 "supported_io_types": { 00:11:23.576 "read": true, 00:11:23.576 "write": true, 00:11:23.576 "unmap": true, 00:11:23.576 "flush": true, 00:11:23.576 "reset": true, 00:11:23.576 "nvme_admin": false, 00:11:23.576 "nvme_io": false, 00:11:23.576 "nvme_io_md": false, 00:11:23.576 "write_zeroes": true, 00:11:23.576 "zcopy": true, 00:11:23.576 "get_zone_info": false, 00:11:23.576 "zone_management": false, 00:11:23.576 "zone_append": false, 00:11:23.576 "compare": false, 00:11:23.576 "compare_and_write": false, 00:11:23.576 "abort": true, 00:11:23.576 "seek_hole": false, 00:11:23.576 "seek_data": false, 00:11:23.576 "copy": true, 00:11:23.576 "nvme_iov_md": false 00:11:23.576 }, 00:11:23.576 "memory_domains": [ 00:11:23.576 { 00:11:23.576 "dma_device_id": "system", 00:11:23.576 "dma_device_type": 1 00:11:23.576 }, 00:11:23.576 { 00:11:23.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.576 "dma_device_type": 2 00:11:23.576 } 00:11:23.576 ], 00:11:23.576 "driver_specific": {} 00:11:23.576 } 00:11:23.576 ] 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:23.576 09:30:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.576 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.577 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.577 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.577 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.577 "name": "Existed_Raid", 00:11:23.577 "uuid": "bd1ff0c2-d695-45c5-9a29-8e697a609cf3", 00:11:23.577 "strip_size_kb": 64, 00:11:23.577 "state": "online", 00:11:23.577 "raid_level": 
"concat", 00:11:23.577 "superblock": false, 00:11:23.577 "num_base_bdevs": 4, 00:11:23.577 "num_base_bdevs_discovered": 4, 00:11:23.577 "num_base_bdevs_operational": 4, 00:11:23.577 "base_bdevs_list": [ 00:11:23.577 { 00:11:23.577 "name": "NewBaseBdev", 00:11:23.577 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:23.577 "is_configured": true, 00:11:23.577 "data_offset": 0, 00:11:23.577 "data_size": 65536 00:11:23.577 }, 00:11:23.577 { 00:11:23.577 "name": "BaseBdev2", 00:11:23.577 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:23.577 "is_configured": true, 00:11:23.577 "data_offset": 0, 00:11:23.577 "data_size": 65536 00:11:23.577 }, 00:11:23.577 { 00:11:23.577 "name": "BaseBdev3", 00:11:23.577 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:23.577 "is_configured": true, 00:11:23.577 "data_offset": 0, 00:11:23.577 "data_size": 65536 00:11:23.577 }, 00:11:23.577 { 00:11:23.577 "name": "BaseBdev4", 00:11:23.577 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:23.577 "is_configured": true, 00:11:23.577 "data_offset": 0, 00:11:23.577 "data_size": 65536 00:11:23.577 } 00:11:23.577 ] 00:11:23.577 }' 00:11:23.577 09:30:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.577 09:30:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.146 [2024-11-15 09:30:12.323418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.146 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.146 "name": "Existed_Raid", 00:11:24.146 "aliases": [ 00:11:24.146 "bd1ff0c2-d695-45c5-9a29-8e697a609cf3" 00:11:24.146 ], 00:11:24.146 "product_name": "Raid Volume", 00:11:24.146 "block_size": 512, 00:11:24.146 "num_blocks": 262144, 00:11:24.146 "uuid": "bd1ff0c2-d695-45c5-9a29-8e697a609cf3", 00:11:24.146 "assigned_rate_limits": { 00:11:24.146 "rw_ios_per_sec": 0, 00:11:24.146 "rw_mbytes_per_sec": 0, 00:11:24.146 "r_mbytes_per_sec": 0, 00:11:24.146 "w_mbytes_per_sec": 0 00:11:24.146 }, 00:11:24.146 "claimed": false, 00:11:24.146 "zoned": false, 00:11:24.146 "supported_io_types": { 00:11:24.146 "read": true, 00:11:24.146 "write": true, 00:11:24.146 "unmap": true, 00:11:24.146 "flush": true, 00:11:24.146 "reset": true, 00:11:24.146 "nvme_admin": false, 00:11:24.146 "nvme_io": false, 00:11:24.146 "nvme_io_md": false, 00:11:24.146 "write_zeroes": true, 00:11:24.146 "zcopy": false, 00:11:24.146 "get_zone_info": false, 00:11:24.146 "zone_management": false, 00:11:24.146 "zone_append": false, 00:11:24.146 "compare": false, 00:11:24.146 "compare_and_write": false, 00:11:24.146 "abort": false, 00:11:24.146 "seek_hole": false, 00:11:24.146 "seek_data": false, 00:11:24.146 "copy": false, 
00:11:24.146 "nvme_iov_md": false 00:11:24.146 }, 00:11:24.146 "memory_domains": [ 00:11:24.146 { 00:11:24.146 "dma_device_id": "system", 00:11:24.146 "dma_device_type": 1 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.146 "dma_device_type": 2 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "system", 00:11:24.146 "dma_device_type": 1 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.146 "dma_device_type": 2 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "system", 00:11:24.146 "dma_device_type": 1 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.146 "dma_device_type": 2 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "system", 00:11:24.146 "dma_device_type": 1 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.146 "dma_device_type": 2 00:11:24.146 } 00:11:24.146 ], 00:11:24.146 "driver_specific": { 00:11:24.146 "raid": { 00:11:24.146 "uuid": "bd1ff0c2-d695-45c5-9a29-8e697a609cf3", 00:11:24.146 "strip_size_kb": 64, 00:11:24.146 "state": "online", 00:11:24.146 "raid_level": "concat", 00:11:24.146 "superblock": false, 00:11:24.146 "num_base_bdevs": 4, 00:11:24.146 "num_base_bdevs_discovered": 4, 00:11:24.146 "num_base_bdevs_operational": 4, 00:11:24.146 "base_bdevs_list": [ 00:11:24.146 { 00:11:24.146 "name": "NewBaseBdev", 00:11:24.146 "uuid": "0603f60d-685d-46b3-b645-88142f898eb9", 00:11:24.146 "is_configured": true, 00:11:24.146 "data_offset": 0, 00:11:24.146 "data_size": 65536 00:11:24.146 }, 00:11:24.146 { 00:11:24.146 "name": "BaseBdev2", 00:11:24.146 "uuid": "2732ffa9-de3d-4a01-9823-2abbdd6fb3d5", 00:11:24.146 "is_configured": true, 00:11:24.146 "data_offset": 0, 00:11:24.146 "data_size": 65536 00:11:24.147 }, 00:11:24.147 { 00:11:24.147 "name": "BaseBdev3", 00:11:24.147 "uuid": "3f32d2f4-2fdf-4475-9698-668ea8a57c79", 00:11:24.147 
"is_configured": true, 00:11:24.147 "data_offset": 0, 00:11:24.147 "data_size": 65536 00:11:24.147 }, 00:11:24.147 { 00:11:24.147 "name": "BaseBdev4", 00:11:24.147 "uuid": "a0bb272e-118e-4adc-a331-556d18f68c34", 00:11:24.147 "is_configured": true, 00:11:24.147 "data_offset": 0, 00:11:24.147 "data_size": 65536 00:11:24.147 } 00:11:24.147 ] 00:11:24.147 } 00:11:24.147 } 00:11:24.147 }' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:24.147 BaseBdev2 00:11:24.147 BaseBdev3 00:11:24.147 BaseBdev4' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.147 09:30:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.147 09:30:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.147 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.407 [2024-11-15 09:30:12.662455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.407 [2024-11-15 09:30:12.662568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.407 [2024-11-15 09:30:12.662687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.407 [2024-11-15 09:30:12.662768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.407 [2024-11-15 09:30:12.662780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71651 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71651 ']' 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71651 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71651 00:11:24.407 killing process with pid 71651 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71651' 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71651 00:11:24.407 [2024-11-15 09:30:12.700582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.407 09:30:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71651 00:11:24.975 [2024-11-15 09:30:13.141718] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.950 09:30:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.950 00:11:25.950 real 0m11.854s 00:11:25.950 user 0m18.465s 00:11:25.950 sys 0m2.226s 00:11:25.950 09:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:25.950 ************************************ 00:11:25.950 END TEST raid_state_function_test 00:11:25.950 ************************************ 00:11:25.950 09:30:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:26.210 09:30:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:26.210 09:30:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:26.210 09:30:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.210 09:30:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.210 ************************************ 00:11:26.210 START TEST raid_state_function_test_sb 00:11:26.210 ************************************ 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.210 
09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=72328 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72328' 00:11:26.210 Process raid pid: 72328 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72328 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72328 ']' 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:26.210 09:30:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.210 [2024-11-15 09:30:14.552535] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:26.210 [2024-11-15 09:30:14.552783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.470 [2024-11-15 09:30:14.736696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.470 [2024-11-15 09:30:14.878389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.729 [2024-11-15 09:30:15.132737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.729 [2024-11-15 09:30:15.132804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.988 [2024-11-15 09:30:15.416683] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.988 [2024-11-15 09:30:15.416747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.988 [2024-11-15 09:30:15.416760] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.988 [2024-11-15 09:30:15.416773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.988 [2024-11-15 09:30:15.416781] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:26.988 [2024-11-15 09:30:15.416792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.988 [2024-11-15 09:30:15.416800] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:26.988 [2024-11-15 09:30:15.416811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.988 
09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.988 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.248 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.248 "name": "Existed_Raid", 00:11:27.248 "uuid": "8c213272-6747-4d15-818b-73978929369c", 00:11:27.248 "strip_size_kb": 64, 00:11:27.248 "state": "configuring", 00:11:27.248 "raid_level": "concat", 00:11:27.248 "superblock": true, 00:11:27.248 "num_base_bdevs": 4, 00:11:27.248 "num_base_bdevs_discovered": 0, 00:11:27.248 "num_base_bdevs_operational": 4, 00:11:27.248 "base_bdevs_list": [ 00:11:27.248 { 00:11:27.248 "name": "BaseBdev1", 00:11:27.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.248 "is_configured": false, 00:11:27.248 "data_offset": 0, 00:11:27.248 "data_size": 0 00:11:27.248 }, 00:11:27.248 { 00:11:27.248 "name": "BaseBdev2", 00:11:27.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.248 "is_configured": false, 00:11:27.248 "data_offset": 0, 00:11:27.248 "data_size": 0 00:11:27.248 }, 00:11:27.248 { 00:11:27.248 "name": "BaseBdev3", 00:11:27.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.248 "is_configured": false, 00:11:27.248 "data_offset": 0, 00:11:27.248 "data_size": 0 00:11:27.248 }, 00:11:27.248 { 00:11:27.248 "name": "BaseBdev4", 00:11:27.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.248 "is_configured": false, 00:11:27.248 "data_offset": 0, 00:11:27.248 "data_size": 0 00:11:27.248 } 00:11:27.248 ] 00:11:27.248 }' 00:11:27.248 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.248 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.507 09:30:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.507 [2024-11-15 09:30:15.923769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.507 [2024-11-15 09:30:15.923819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.507 [2024-11-15 09:30:15.935767] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.507 [2024-11-15 09:30:15.935906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.507 [2024-11-15 09:30:15.935969] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.507 [2024-11-15 09:30:15.935998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.507 [2024-11-15 09:30:15.936021] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.507 [2024-11-15 09:30:15.936046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.507 [2024-11-15 09:30:15.936067] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:27.507 [2024-11-15 09:30:15.936110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.507 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.767 [2024-11-15 09:30:15.993071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.767 BaseBdev1 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.767 09:30:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.767 [ 00:11:27.767 { 00:11:27.767 "name": "BaseBdev1", 00:11:27.767 "aliases": [ 00:11:27.767 "a2663919-fe30-4835-b497-6b86eecfc69b" 00:11:27.767 ], 00:11:27.767 "product_name": "Malloc disk", 00:11:27.767 "block_size": 512, 00:11:27.767 "num_blocks": 65536, 00:11:27.767 "uuid": "a2663919-fe30-4835-b497-6b86eecfc69b", 00:11:27.767 "assigned_rate_limits": { 00:11:27.767 "rw_ios_per_sec": 0, 00:11:27.767 "rw_mbytes_per_sec": 0, 00:11:27.767 "r_mbytes_per_sec": 0, 00:11:27.767 "w_mbytes_per_sec": 0 00:11:27.767 }, 00:11:27.767 "claimed": true, 00:11:27.767 "claim_type": "exclusive_write", 00:11:27.767 "zoned": false, 00:11:27.767 "supported_io_types": { 00:11:27.767 "read": true, 00:11:27.767 "write": true, 00:11:27.767 "unmap": true, 00:11:27.767 "flush": true, 00:11:27.767 "reset": true, 00:11:27.767 "nvme_admin": false, 00:11:27.767 "nvme_io": false, 00:11:27.767 "nvme_io_md": false, 00:11:27.767 "write_zeroes": true, 00:11:27.767 "zcopy": true, 00:11:27.767 "get_zone_info": false, 00:11:27.767 "zone_management": false, 00:11:27.767 "zone_append": false, 00:11:27.767 "compare": false, 00:11:27.767 "compare_and_write": false, 00:11:27.767 "abort": true, 00:11:27.767 "seek_hole": false, 00:11:27.767 "seek_data": false, 00:11:27.767 "copy": true, 00:11:27.767 "nvme_iov_md": false 00:11:27.767 }, 00:11:27.767 "memory_domains": [ 00:11:27.767 { 00:11:27.767 "dma_device_id": "system", 00:11:27.767 "dma_device_type": 1 00:11:27.767 }, 00:11:27.767 { 00:11:27.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.767 "dma_device_type": 2 00:11:27.767 } 
00:11:27.767 ], 00:11:27.767 "driver_specific": {} 00:11:27.767 } 00:11:27.767 ] 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.767 09:30:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.767 "name": "Existed_Raid", 00:11:27.767 "uuid": "3c4a940a-5d4b-4bad-8159-92742e0a0531", 00:11:27.767 "strip_size_kb": 64, 00:11:27.767 "state": "configuring", 00:11:27.767 "raid_level": "concat", 00:11:27.767 "superblock": true, 00:11:27.767 "num_base_bdevs": 4, 00:11:27.767 "num_base_bdevs_discovered": 1, 00:11:27.767 "num_base_bdevs_operational": 4, 00:11:27.767 "base_bdevs_list": [ 00:11:27.767 { 00:11:27.767 "name": "BaseBdev1", 00:11:27.767 "uuid": "a2663919-fe30-4835-b497-6b86eecfc69b", 00:11:27.767 "is_configured": true, 00:11:27.767 "data_offset": 2048, 00:11:27.767 "data_size": 63488 00:11:27.767 }, 00:11:27.767 { 00:11:27.767 "name": "BaseBdev2", 00:11:27.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.767 "is_configured": false, 00:11:27.767 "data_offset": 0, 00:11:27.767 "data_size": 0 00:11:27.767 }, 00:11:27.767 { 00:11:27.767 "name": "BaseBdev3", 00:11:27.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.767 "is_configured": false, 00:11:27.767 "data_offset": 0, 00:11:27.767 "data_size": 0 00:11:27.767 }, 00:11:27.767 { 00:11:27.767 "name": "BaseBdev4", 00:11:27.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.767 "is_configured": false, 00:11:27.767 "data_offset": 0, 00:11:27.767 "data_size": 0 00:11:27.767 } 00:11:27.767 ] 00:11:27.767 }' 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.767 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.027 09:30:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.027 [2024-11-15 09:30:16.456395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.027 [2024-11-15 09:30:16.456467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.027 [2024-11-15 09:30:16.468422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.027 [2024-11-15 09:30:16.470771] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.027 [2024-11-15 09:30:16.470813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.027 [2024-11-15 09:30:16.470841] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.027 [2024-11-15 09:30:16.470852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.027 [2024-11-15 09:30:16.470859] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:28.027 [2024-11-15 09:30:16.470878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.027 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.286 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.286 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:28.286 "name": "Existed_Raid", 00:11:28.286 "uuid": "f9e6c402-0f5d-4189-85d6-412b5458dfba", 00:11:28.286 "strip_size_kb": 64, 00:11:28.286 "state": "configuring", 00:11:28.286 "raid_level": "concat", 00:11:28.286 "superblock": true, 00:11:28.286 "num_base_bdevs": 4, 00:11:28.286 "num_base_bdevs_discovered": 1, 00:11:28.286 "num_base_bdevs_operational": 4, 00:11:28.286 "base_bdevs_list": [ 00:11:28.286 { 00:11:28.286 "name": "BaseBdev1", 00:11:28.286 "uuid": "a2663919-fe30-4835-b497-6b86eecfc69b", 00:11:28.286 "is_configured": true, 00:11:28.286 "data_offset": 2048, 00:11:28.286 "data_size": 63488 00:11:28.286 }, 00:11:28.286 { 00:11:28.286 "name": "BaseBdev2", 00:11:28.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.286 "is_configured": false, 00:11:28.286 "data_offset": 0, 00:11:28.286 "data_size": 0 00:11:28.286 }, 00:11:28.286 { 00:11:28.286 "name": "BaseBdev3", 00:11:28.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.286 "is_configured": false, 00:11:28.286 "data_offset": 0, 00:11:28.286 "data_size": 0 00:11:28.286 }, 00:11:28.286 { 00:11:28.286 "name": "BaseBdev4", 00:11:28.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.286 "is_configured": false, 00:11:28.286 "data_offset": 0, 00:11:28.286 "data_size": 0 00:11:28.286 } 00:11:28.286 ] 00:11:28.286 }' 00:11:28.286 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.286 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 [2024-11-15 09:30:16.939728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:28.545 BaseBdev2 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.545 [ 00:11:28.545 { 00:11:28.545 "name": "BaseBdev2", 00:11:28.545 "aliases": [ 00:11:28.545 "d058db5e-a8e1-46e9-b174-cce9507af488" 00:11:28.545 ], 00:11:28.545 "product_name": "Malloc disk", 00:11:28.545 "block_size": 512, 00:11:28.545 "num_blocks": 65536, 00:11:28.545 "uuid": "d058db5e-a8e1-46e9-b174-cce9507af488", 
00:11:28.545 "assigned_rate_limits": { 00:11:28.545 "rw_ios_per_sec": 0, 00:11:28.545 "rw_mbytes_per_sec": 0, 00:11:28.545 "r_mbytes_per_sec": 0, 00:11:28.545 "w_mbytes_per_sec": 0 00:11:28.545 }, 00:11:28.545 "claimed": true, 00:11:28.545 "claim_type": "exclusive_write", 00:11:28.545 "zoned": false, 00:11:28.545 "supported_io_types": { 00:11:28.545 "read": true, 00:11:28.545 "write": true, 00:11:28.545 "unmap": true, 00:11:28.545 "flush": true, 00:11:28.545 "reset": true, 00:11:28.545 "nvme_admin": false, 00:11:28.545 "nvme_io": false, 00:11:28.545 "nvme_io_md": false, 00:11:28.545 "write_zeroes": true, 00:11:28.545 "zcopy": true, 00:11:28.545 "get_zone_info": false, 00:11:28.545 "zone_management": false, 00:11:28.545 "zone_append": false, 00:11:28.545 "compare": false, 00:11:28.545 "compare_and_write": false, 00:11:28.545 "abort": true, 00:11:28.545 "seek_hole": false, 00:11:28.545 "seek_data": false, 00:11:28.545 "copy": true, 00:11:28.545 "nvme_iov_md": false 00:11:28.545 }, 00:11:28.545 "memory_domains": [ 00:11:28.545 { 00:11:28.545 "dma_device_id": "system", 00:11:28.545 "dma_device_type": 1 00:11:28.545 }, 00:11:28.545 { 00:11:28.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.545 "dma_device_type": 2 00:11:28.545 } 00:11:28.545 ], 00:11:28.545 "driver_specific": {} 00:11:28.545 } 00:11:28.545 ] 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.545 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.546 09:30:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.546 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.546 09:30:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.546 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.805 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.805 "name": "Existed_Raid", 00:11:28.805 "uuid": "f9e6c402-0f5d-4189-85d6-412b5458dfba", 00:11:28.805 "strip_size_kb": 64, 00:11:28.805 "state": "configuring", 00:11:28.805 "raid_level": "concat", 00:11:28.805 "superblock": true, 00:11:28.805 "num_base_bdevs": 4, 00:11:28.805 "num_base_bdevs_discovered": 2, 00:11:28.805 
"num_base_bdevs_operational": 4, 00:11:28.805 "base_bdevs_list": [ 00:11:28.805 { 00:11:28.805 "name": "BaseBdev1", 00:11:28.805 "uuid": "a2663919-fe30-4835-b497-6b86eecfc69b", 00:11:28.805 "is_configured": true, 00:11:28.805 "data_offset": 2048, 00:11:28.805 "data_size": 63488 00:11:28.805 }, 00:11:28.805 { 00:11:28.805 "name": "BaseBdev2", 00:11:28.805 "uuid": "d058db5e-a8e1-46e9-b174-cce9507af488", 00:11:28.805 "is_configured": true, 00:11:28.805 "data_offset": 2048, 00:11:28.805 "data_size": 63488 00:11:28.805 }, 00:11:28.805 { 00:11:28.805 "name": "BaseBdev3", 00:11:28.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.805 "is_configured": false, 00:11:28.805 "data_offset": 0, 00:11:28.805 "data_size": 0 00:11:28.805 }, 00:11:28.805 { 00:11:28.805 "name": "BaseBdev4", 00:11:28.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.805 "is_configured": false, 00:11:28.805 "data_offset": 0, 00:11:28.805 "data_size": 0 00:11:28.805 } 00:11:28.805 ] 00:11:28.805 }' 00:11:28.805 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.805 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.065 [2024-11-15 09:30:17.519521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.065 BaseBdev3 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.065 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.324 [ 00:11:29.324 { 00:11:29.324 "name": "BaseBdev3", 00:11:29.324 "aliases": [ 00:11:29.324 "04a83a64-c145-430f-b6d4-4ea266674a9e" 00:11:29.324 ], 00:11:29.324 "product_name": "Malloc disk", 00:11:29.324 "block_size": 512, 00:11:29.324 "num_blocks": 65536, 00:11:29.324 "uuid": "04a83a64-c145-430f-b6d4-4ea266674a9e", 00:11:29.324 "assigned_rate_limits": { 00:11:29.324 "rw_ios_per_sec": 0, 00:11:29.324 "rw_mbytes_per_sec": 0, 00:11:29.324 "r_mbytes_per_sec": 0, 00:11:29.324 "w_mbytes_per_sec": 0 00:11:29.324 }, 00:11:29.324 "claimed": true, 00:11:29.324 "claim_type": "exclusive_write", 00:11:29.324 "zoned": false, 00:11:29.324 "supported_io_types": { 
00:11:29.324 "read": true, 00:11:29.324 "write": true, 00:11:29.324 "unmap": true, 00:11:29.324 "flush": true, 00:11:29.324 "reset": true, 00:11:29.324 "nvme_admin": false, 00:11:29.324 "nvme_io": false, 00:11:29.324 "nvme_io_md": false, 00:11:29.324 "write_zeroes": true, 00:11:29.324 "zcopy": true, 00:11:29.324 "get_zone_info": false, 00:11:29.324 "zone_management": false, 00:11:29.324 "zone_append": false, 00:11:29.324 "compare": false, 00:11:29.324 "compare_and_write": false, 00:11:29.324 "abort": true, 00:11:29.324 "seek_hole": false, 00:11:29.324 "seek_data": false, 00:11:29.324 "copy": true, 00:11:29.324 "nvme_iov_md": false 00:11:29.324 }, 00:11:29.324 "memory_domains": [ 00:11:29.324 { 00:11:29.324 "dma_device_id": "system", 00:11:29.324 "dma_device_type": 1 00:11:29.324 }, 00:11:29.324 { 00:11:29.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.324 "dma_device_type": 2 00:11:29.324 } 00:11:29.324 ], 00:11:29.324 "driver_specific": {} 00:11:29.324 } 00:11:29.324 ] 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.324 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.324 "name": "Existed_Raid", 00:11:29.324 "uuid": "f9e6c402-0f5d-4189-85d6-412b5458dfba", 00:11:29.324 "strip_size_kb": 64, 00:11:29.324 "state": "configuring", 00:11:29.324 "raid_level": "concat", 00:11:29.324 "superblock": true, 00:11:29.324 "num_base_bdevs": 4, 00:11:29.324 "num_base_bdevs_discovered": 3, 00:11:29.324 "num_base_bdevs_operational": 4, 00:11:29.324 "base_bdevs_list": [ 00:11:29.324 { 00:11:29.324 "name": "BaseBdev1", 00:11:29.324 "uuid": "a2663919-fe30-4835-b497-6b86eecfc69b", 00:11:29.324 "is_configured": true, 00:11:29.324 "data_offset": 2048, 00:11:29.324 "data_size": 63488 00:11:29.324 }, 00:11:29.324 { 00:11:29.324 "name": "BaseBdev2", 00:11:29.324 
"uuid": "d058db5e-a8e1-46e9-b174-cce9507af488", 00:11:29.324 "is_configured": true, 00:11:29.324 "data_offset": 2048, 00:11:29.325 "data_size": 63488 00:11:29.325 }, 00:11:29.325 { 00:11:29.325 "name": "BaseBdev3", 00:11:29.325 "uuid": "04a83a64-c145-430f-b6d4-4ea266674a9e", 00:11:29.325 "is_configured": true, 00:11:29.325 "data_offset": 2048, 00:11:29.325 "data_size": 63488 00:11:29.325 }, 00:11:29.325 { 00:11:29.325 "name": "BaseBdev4", 00:11:29.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.325 "is_configured": false, 00:11:29.325 "data_offset": 0, 00:11:29.325 "data_size": 0 00:11:29.325 } 00:11:29.325 ] 00:11:29.325 }' 00:11:29.325 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.325 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.583 09:30:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:29.583 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.583 09:30:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.583 [2024-11-15 09:30:18.041705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:29.583 [2024-11-15 09:30:18.042070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.583 [2024-11-15 09:30:18.042091] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.583 BaseBdev4 00:11:29.583 [2024-11-15 09:30:18.042436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:29.583 [2024-11-15 09:30:18.042619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.583 [2024-11-15 09:30:18.042636] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:29.583 [2024-11-15 09:30:18.042804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.583 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.843 [ 00:11:29.843 { 00:11:29.843 "name": "BaseBdev4", 00:11:29.843 "aliases": [ 00:11:29.843 "88600492-272b-4a2a-a22f-487c0c75d51e" 00:11:29.843 ], 00:11:29.843 "product_name": "Malloc disk", 00:11:29.843 "block_size": 512, 00:11:29.843 
"num_blocks": 65536, 00:11:29.843 "uuid": "88600492-272b-4a2a-a22f-487c0c75d51e", 00:11:29.843 "assigned_rate_limits": { 00:11:29.843 "rw_ios_per_sec": 0, 00:11:29.843 "rw_mbytes_per_sec": 0, 00:11:29.843 "r_mbytes_per_sec": 0, 00:11:29.843 "w_mbytes_per_sec": 0 00:11:29.843 }, 00:11:29.843 "claimed": true, 00:11:29.843 "claim_type": "exclusive_write", 00:11:29.843 "zoned": false, 00:11:29.843 "supported_io_types": { 00:11:29.843 "read": true, 00:11:29.843 "write": true, 00:11:29.843 "unmap": true, 00:11:29.843 "flush": true, 00:11:29.843 "reset": true, 00:11:29.843 "nvme_admin": false, 00:11:29.843 "nvme_io": false, 00:11:29.843 "nvme_io_md": false, 00:11:29.843 "write_zeroes": true, 00:11:29.843 "zcopy": true, 00:11:29.843 "get_zone_info": false, 00:11:29.843 "zone_management": false, 00:11:29.843 "zone_append": false, 00:11:29.843 "compare": false, 00:11:29.843 "compare_and_write": false, 00:11:29.843 "abort": true, 00:11:29.843 "seek_hole": false, 00:11:29.843 "seek_data": false, 00:11:29.843 "copy": true, 00:11:29.843 "nvme_iov_md": false 00:11:29.843 }, 00:11:29.843 "memory_domains": [ 00:11:29.843 { 00:11:29.843 "dma_device_id": "system", 00:11:29.843 "dma_device_type": 1 00:11:29.843 }, 00:11:29.843 { 00:11:29.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.843 "dma_device_type": 2 00:11:29.843 } 00:11:29.843 ], 00:11:29.843 "driver_specific": {} 00:11:29.843 } 00:11:29.843 ] 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.843 "name": "Existed_Raid", 00:11:29.843 "uuid": "f9e6c402-0f5d-4189-85d6-412b5458dfba", 00:11:29.843 "strip_size_kb": 64, 00:11:29.843 "state": "online", 00:11:29.843 "raid_level": "concat", 00:11:29.843 "superblock": true, 00:11:29.843 "num_base_bdevs": 4, 
00:11:29.843 "num_base_bdevs_discovered": 4, 00:11:29.843 "num_base_bdevs_operational": 4, 00:11:29.843 "base_bdevs_list": [ 00:11:29.843 { 00:11:29.843 "name": "BaseBdev1", 00:11:29.843 "uuid": "a2663919-fe30-4835-b497-6b86eecfc69b", 00:11:29.843 "is_configured": true, 00:11:29.843 "data_offset": 2048, 00:11:29.843 "data_size": 63488 00:11:29.843 }, 00:11:29.843 { 00:11:29.843 "name": "BaseBdev2", 00:11:29.843 "uuid": "d058db5e-a8e1-46e9-b174-cce9507af488", 00:11:29.843 "is_configured": true, 00:11:29.843 "data_offset": 2048, 00:11:29.843 "data_size": 63488 00:11:29.843 }, 00:11:29.843 { 00:11:29.843 "name": "BaseBdev3", 00:11:29.843 "uuid": "04a83a64-c145-430f-b6d4-4ea266674a9e", 00:11:29.843 "is_configured": true, 00:11:29.843 "data_offset": 2048, 00:11:29.843 "data_size": 63488 00:11:29.843 }, 00:11:29.843 { 00:11:29.843 "name": "BaseBdev4", 00:11:29.843 "uuid": "88600492-272b-4a2a-a22f-487c0c75d51e", 00:11:29.843 "is_configured": true, 00:11:29.843 "data_offset": 2048, 00:11:29.843 "data_size": 63488 00:11:29.843 } 00:11:29.843 ] 00:11:29.843 }' 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.843 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.102 
09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.102 [2024-11-15 09:30:18.541358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.102 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.362 "name": "Existed_Raid", 00:11:30.362 "aliases": [ 00:11:30.362 "f9e6c402-0f5d-4189-85d6-412b5458dfba" 00:11:30.362 ], 00:11:30.362 "product_name": "Raid Volume", 00:11:30.362 "block_size": 512, 00:11:30.362 "num_blocks": 253952, 00:11:30.362 "uuid": "f9e6c402-0f5d-4189-85d6-412b5458dfba", 00:11:30.362 "assigned_rate_limits": { 00:11:30.362 "rw_ios_per_sec": 0, 00:11:30.362 "rw_mbytes_per_sec": 0, 00:11:30.362 "r_mbytes_per_sec": 0, 00:11:30.362 "w_mbytes_per_sec": 0 00:11:30.362 }, 00:11:30.362 "claimed": false, 00:11:30.362 "zoned": false, 00:11:30.362 "supported_io_types": { 00:11:30.362 "read": true, 00:11:30.362 "write": true, 00:11:30.362 "unmap": true, 00:11:30.362 "flush": true, 00:11:30.362 "reset": true, 00:11:30.362 "nvme_admin": false, 00:11:30.362 "nvme_io": false, 00:11:30.362 "nvme_io_md": false, 00:11:30.362 "write_zeroes": true, 00:11:30.362 "zcopy": false, 00:11:30.362 "get_zone_info": false, 00:11:30.362 "zone_management": false, 00:11:30.362 "zone_append": false, 00:11:30.362 "compare": false, 00:11:30.362 "compare_and_write": false, 00:11:30.362 "abort": false, 00:11:30.362 "seek_hole": false, 00:11:30.362 "seek_data": false, 00:11:30.362 "copy": false, 00:11:30.362 
"nvme_iov_md": false 00:11:30.362 }, 00:11:30.362 "memory_domains": [ 00:11:30.362 { 00:11:30.362 "dma_device_id": "system", 00:11:30.362 "dma_device_type": 1 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.362 "dma_device_type": 2 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "dma_device_id": "system", 00:11:30.362 "dma_device_type": 1 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.362 "dma_device_type": 2 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "dma_device_id": "system", 00:11:30.362 "dma_device_type": 1 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.362 "dma_device_type": 2 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "dma_device_id": "system", 00:11:30.362 "dma_device_type": 1 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.362 "dma_device_type": 2 00:11:30.362 } 00:11:30.362 ], 00:11:30.362 "driver_specific": { 00:11:30.362 "raid": { 00:11:30.362 "uuid": "f9e6c402-0f5d-4189-85d6-412b5458dfba", 00:11:30.362 "strip_size_kb": 64, 00:11:30.362 "state": "online", 00:11:30.362 "raid_level": "concat", 00:11:30.362 "superblock": true, 00:11:30.362 "num_base_bdevs": 4, 00:11:30.362 "num_base_bdevs_discovered": 4, 00:11:30.362 "num_base_bdevs_operational": 4, 00:11:30.362 "base_bdevs_list": [ 00:11:30.362 { 00:11:30.362 "name": "BaseBdev1", 00:11:30.362 "uuid": "a2663919-fe30-4835-b497-6b86eecfc69b", 00:11:30.362 "is_configured": true, 00:11:30.362 "data_offset": 2048, 00:11:30.362 "data_size": 63488 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "name": "BaseBdev2", 00:11:30.362 "uuid": "d058db5e-a8e1-46e9-b174-cce9507af488", 00:11:30.362 "is_configured": true, 00:11:30.362 "data_offset": 2048, 00:11:30.362 "data_size": 63488 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "name": "BaseBdev3", 00:11:30.362 "uuid": "04a83a64-c145-430f-b6d4-4ea266674a9e", 00:11:30.362 "is_configured": true, 
00:11:30.362 "data_offset": 2048, 00:11:30.362 "data_size": 63488 00:11:30.362 }, 00:11:30.362 { 00:11:30.362 "name": "BaseBdev4", 00:11:30.362 "uuid": "88600492-272b-4a2a-a22f-487c0c75d51e", 00:11:30.362 "is_configured": true, 00:11:30.362 "data_offset": 2048, 00:11:30.362 "data_size": 63488 00:11:30.362 } 00:11:30.362 ] 00:11:30.362 } 00:11:30.362 } 00:11:30.362 }' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:30.362 BaseBdev2 00:11:30.362 BaseBdev3 00:11:30.362 BaseBdev4' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.362 09:30:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.362 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.622 [2024-11-15 09:30:18.892449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.622 [2024-11-15 09:30:18.892493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.622 [2024-11-15 09:30:18.892569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.622 09:30:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
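Editor's note (not part of the captured run): the `has_redundancy concat` call just above returned 1, so after deleting `BaseBdev1` the test expects the array to go `offline` rather than survive degraded. A sketch of that branch; the exact list of redundant levels in `bdev_raid.sh` is an assumption here (`raid1` and `raid5f` shown for illustration), but the `concat` path matches the `return 1` seen in the log:

```shell
# Levels with no data redundancy (raid0, concat) cannot tolerate the loss
# of a base bdev; redundant levels would stay online in a degraded state.
has_redundancy() {
    case $1 in
        raid1|raid5f) return 0 ;;  # assumed redundant levels, for illustration
        *) return 1 ;;             # raid0, concat: no redundancy
    esac
}

if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"
```

That is why the very next state dump shows `"state": "offline"` with `num_base_bdevs_discovered` and `num_base_bdevs_operational` both dropped to 3.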
00:11:30.622 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.622 "name": "Existed_Raid", 00:11:30.622 "uuid": "f9e6c402-0f5d-4189-85d6-412b5458dfba", 00:11:30.622 "strip_size_kb": 64, 00:11:30.622 "state": "offline", 00:11:30.622 "raid_level": "concat", 00:11:30.622 "superblock": true, 00:11:30.622 "num_base_bdevs": 4, 00:11:30.622 "num_base_bdevs_discovered": 3, 00:11:30.622 "num_base_bdevs_operational": 3, 00:11:30.622 "base_bdevs_list": [ 00:11:30.622 { 00:11:30.622 "name": null, 00:11:30.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.622 "is_configured": false, 00:11:30.622 "data_offset": 0, 00:11:30.622 "data_size": 63488 00:11:30.622 }, 00:11:30.622 { 00:11:30.622 "name": "BaseBdev2", 00:11:30.622 "uuid": "d058db5e-a8e1-46e9-b174-cce9507af488", 00:11:30.622 "is_configured": true, 00:11:30.622 "data_offset": 2048, 00:11:30.622 "data_size": 63488 00:11:30.622 }, 00:11:30.622 { 00:11:30.622 "name": "BaseBdev3", 00:11:30.622 "uuid": "04a83a64-c145-430f-b6d4-4ea266674a9e", 00:11:30.622 "is_configured": true, 00:11:30.622 "data_offset": 2048, 00:11:30.622 "data_size": 63488 00:11:30.622 }, 00:11:30.622 { 00:11:30.622 "name": "BaseBdev4", 00:11:30.622 "uuid": "88600492-272b-4a2a-a22f-487c0c75d51e", 00:11:30.622 "is_configured": true, 00:11:30.622 "data_offset": 2048, 00:11:30.623 "data_size": 63488 00:11:30.623 } 00:11:30.623 ] 00:11:30.623 }' 00:11:30.623 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.623 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.192 
09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.192 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.192 [2024-11-15 09:30:19.566794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.451 [2024-11-15 09:30:19.735389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.451 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.452 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.452 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.452 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.452 09:30:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:31.452 09:30:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.452 09:30:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.711 [2024-11-15 09:30:19.919993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:31.711 [2024-11-15 09:30:19.920097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.711 BaseBdev2 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.711 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.711 [ 00:11:31.711 { 00:11:31.711 "name": "BaseBdev2", 00:11:31.712 "aliases": [ 00:11:31.712 
"6254fbd2-c934-4295-8956-76956c183402" 00:11:31.712 ], 00:11:31.712 "product_name": "Malloc disk", 00:11:31.712 "block_size": 512, 00:11:31.712 "num_blocks": 65536, 00:11:31.712 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:31.712 "assigned_rate_limits": { 00:11:31.712 "rw_ios_per_sec": 0, 00:11:31.712 "rw_mbytes_per_sec": 0, 00:11:31.712 "r_mbytes_per_sec": 0, 00:11:31.712 "w_mbytes_per_sec": 0 00:11:31.712 }, 00:11:31.712 "claimed": false, 00:11:31.712 "zoned": false, 00:11:31.712 "supported_io_types": { 00:11:31.712 "read": true, 00:11:31.712 "write": true, 00:11:31.712 "unmap": true, 00:11:31.712 "flush": true, 00:11:31.712 "reset": true, 00:11:31.712 "nvme_admin": false, 00:11:31.712 "nvme_io": false, 00:11:31.712 "nvme_io_md": false, 00:11:31.712 "write_zeroes": true, 00:11:31.712 "zcopy": true, 00:11:31.712 "get_zone_info": false, 00:11:31.712 "zone_management": false, 00:11:31.712 "zone_append": false, 00:11:31.712 "compare": false, 00:11:31.712 "compare_and_write": false, 00:11:31.712 "abort": true, 00:11:31.712 "seek_hole": false, 00:11:31.712 "seek_data": false, 00:11:31.712 "copy": true, 00:11:31.712 "nvme_iov_md": false 00:11:31.712 }, 00:11:31.712 "memory_domains": [ 00:11:31.712 { 00:11:31.712 "dma_device_id": "system", 00:11:31.712 "dma_device_type": 1 00:11:31.712 }, 00:11:31.712 { 00:11:31.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.712 "dma_device_type": 2 00:11:31.712 } 00:11:31.712 ], 00:11:31.712 "driver_specific": {} 00:11:31.712 } 00:11:31.712 ] 00:11:31.712 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.712 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:31.712 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.712 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.712 09:30:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.712 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.712 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.971 BaseBdev3 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.971 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.971 [ 00:11:31.971 { 
00:11:31.971 "name": "BaseBdev3", 00:11:31.971 "aliases": [ 00:11:31.971 "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06" 00:11:31.971 ], 00:11:31.971 "product_name": "Malloc disk", 00:11:31.971 "block_size": 512, 00:11:31.971 "num_blocks": 65536, 00:11:31.971 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:31.971 "assigned_rate_limits": { 00:11:31.971 "rw_ios_per_sec": 0, 00:11:31.971 "rw_mbytes_per_sec": 0, 00:11:31.971 "r_mbytes_per_sec": 0, 00:11:31.971 "w_mbytes_per_sec": 0 00:11:31.971 }, 00:11:31.971 "claimed": false, 00:11:31.971 "zoned": false, 00:11:31.971 "supported_io_types": { 00:11:31.971 "read": true, 00:11:31.971 "write": true, 00:11:31.971 "unmap": true, 00:11:31.971 "flush": true, 00:11:31.972 "reset": true, 00:11:31.972 "nvme_admin": false, 00:11:31.972 "nvme_io": false, 00:11:31.972 "nvme_io_md": false, 00:11:31.972 "write_zeroes": true, 00:11:31.972 "zcopy": true, 00:11:31.972 "get_zone_info": false, 00:11:31.972 "zone_management": false, 00:11:31.972 "zone_append": false, 00:11:31.972 "compare": false, 00:11:31.972 "compare_and_write": false, 00:11:31.972 "abort": true, 00:11:31.972 "seek_hole": false, 00:11:31.972 "seek_data": false, 00:11:31.972 "copy": true, 00:11:31.972 "nvme_iov_md": false 00:11:31.972 }, 00:11:31.972 "memory_domains": [ 00:11:31.972 { 00:11:31.972 "dma_device_id": "system", 00:11:31.972 "dma_device_type": 1 00:11:31.972 }, 00:11:31.972 { 00:11:31.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.972 "dma_device_type": 2 00:11:31.972 } 00:11:31.972 ], 00:11:31.972 "driver_specific": {} 00:11:31.972 } 00:11:31.972 ] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.972 BaseBdev4 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:31.972 [ 00:11:31.972 { 00:11:31.972 "name": "BaseBdev4", 00:11:31.972 "aliases": [ 00:11:31.972 "8d7af84e-cc4f-413a-8914-5c712c3d1266" 00:11:31.972 ], 00:11:31.972 "product_name": "Malloc disk", 00:11:31.972 "block_size": 512, 00:11:31.972 "num_blocks": 65536, 00:11:31.972 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:31.972 "assigned_rate_limits": { 00:11:31.972 "rw_ios_per_sec": 0, 00:11:31.972 "rw_mbytes_per_sec": 0, 00:11:31.972 "r_mbytes_per_sec": 0, 00:11:31.972 "w_mbytes_per_sec": 0 00:11:31.972 }, 00:11:31.972 "claimed": false, 00:11:31.972 "zoned": false, 00:11:31.972 "supported_io_types": { 00:11:31.972 "read": true, 00:11:31.972 "write": true, 00:11:31.972 "unmap": true, 00:11:31.972 "flush": true, 00:11:31.972 "reset": true, 00:11:31.972 "nvme_admin": false, 00:11:31.972 "nvme_io": false, 00:11:31.972 "nvme_io_md": false, 00:11:31.972 "write_zeroes": true, 00:11:31.972 "zcopy": true, 00:11:31.972 "get_zone_info": false, 00:11:31.972 "zone_management": false, 00:11:31.972 "zone_append": false, 00:11:31.972 "compare": false, 00:11:31.972 "compare_and_write": false, 00:11:31.972 "abort": true, 00:11:31.972 "seek_hole": false, 00:11:31.972 "seek_data": false, 00:11:31.972 "copy": true, 00:11:31.972 "nvme_iov_md": false 00:11:31.972 }, 00:11:31.972 "memory_domains": [ 00:11:31.972 { 00:11:31.972 "dma_device_id": "system", 00:11:31.972 "dma_device_type": 1 00:11:31.972 }, 00:11:31.972 { 00:11:31.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.972 "dma_device_type": 2 00:11:31.972 } 00:11:31.972 ], 00:11:31.972 "driver_specific": {} 00:11:31.972 } 00:11:31.972 ] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:31.972 09:30:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.972 [2024-11-15 09:30:20.355867] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.972 [2024-11-15 09:30:20.355947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.972 [2024-11-15 09:30:20.355988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.972 [2024-11-15 09:30:20.358232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.972 [2024-11-15 09:30:20.358310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.972 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.972 "name": "Existed_Raid", 00:11:31.972 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:31.972 "strip_size_kb": 64, 00:11:31.972 "state": "configuring", 00:11:31.972 "raid_level": "concat", 00:11:31.972 "superblock": true, 00:11:31.972 "num_base_bdevs": 4, 00:11:31.972 "num_base_bdevs_discovered": 3, 00:11:31.972 "num_base_bdevs_operational": 4, 00:11:31.972 "base_bdevs_list": [ 00:11:31.972 { 00:11:31.972 "name": "BaseBdev1", 00:11:31.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.972 "is_configured": false, 00:11:31.972 "data_offset": 0, 00:11:31.972 "data_size": 0 00:11:31.972 }, 00:11:31.972 { 00:11:31.972 "name": "BaseBdev2", 00:11:31.972 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:31.972 "is_configured": true, 00:11:31.972 "data_offset": 2048, 00:11:31.972 "data_size": 63488 
00:11:31.972 }, 00:11:31.972 { 00:11:31.972 "name": "BaseBdev3", 00:11:31.972 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:31.972 "is_configured": true, 00:11:31.972 "data_offset": 2048, 00:11:31.973 "data_size": 63488 00:11:31.973 }, 00:11:31.973 { 00:11:31.973 "name": "BaseBdev4", 00:11:31.973 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:31.973 "is_configured": true, 00:11:31.973 "data_offset": 2048, 00:11:31.973 "data_size": 63488 00:11:31.973 } 00:11:31.973 ] 00:11:31.973 }' 00:11:31.973 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.973 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.540 [2024-11-15 09:30:20.823071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.540 "name": "Existed_Raid", 00:11:32.540 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:32.540 "strip_size_kb": 64, 00:11:32.540 "state": "configuring", 00:11:32.540 "raid_level": "concat", 00:11:32.540 "superblock": true, 00:11:32.540 "num_base_bdevs": 4, 00:11:32.540 "num_base_bdevs_discovered": 2, 00:11:32.540 "num_base_bdevs_operational": 4, 00:11:32.540 "base_bdevs_list": [ 00:11:32.540 { 00:11:32.540 "name": "BaseBdev1", 00:11:32.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.540 "is_configured": false, 00:11:32.540 "data_offset": 0, 00:11:32.540 "data_size": 0 00:11:32.540 }, 00:11:32.540 { 00:11:32.540 "name": null, 00:11:32.540 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:32.540 "is_configured": false, 00:11:32.540 "data_offset": 0, 00:11:32.540 "data_size": 63488 
00:11:32.540 }, 00:11:32.540 { 00:11:32.540 "name": "BaseBdev3", 00:11:32.540 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:32.540 "is_configured": true, 00:11:32.540 "data_offset": 2048, 00:11:32.540 "data_size": 63488 00:11:32.540 }, 00:11:32.540 { 00:11:32.540 "name": "BaseBdev4", 00:11:32.540 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:32.540 "is_configured": true, 00:11:32.540 "data_offset": 2048, 00:11:32.540 "data_size": 63488 00:11:32.540 } 00:11:32.540 ] 00:11:32.540 }' 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.540 09:30:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.108 [2024-11-15 09:30:21.392655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.108 BaseBdev1 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.108 [ 00:11:33.108 { 00:11:33.108 "name": "BaseBdev1", 00:11:33.108 "aliases": [ 00:11:33.108 "dbdcc0f6-7112-436d-811c-f803beff4a6e" 00:11:33.108 ], 00:11:33.108 "product_name": "Malloc disk", 00:11:33.108 "block_size": 512, 00:11:33.108 "num_blocks": 65536, 00:11:33.108 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:33.108 "assigned_rate_limits": { 00:11:33.108 "rw_ios_per_sec": 0, 00:11:33.108 "rw_mbytes_per_sec": 0, 
00:11:33.108 "r_mbytes_per_sec": 0, 00:11:33.108 "w_mbytes_per_sec": 0 00:11:33.108 }, 00:11:33.108 "claimed": true, 00:11:33.108 "claim_type": "exclusive_write", 00:11:33.108 "zoned": false, 00:11:33.108 "supported_io_types": { 00:11:33.108 "read": true, 00:11:33.108 "write": true, 00:11:33.108 "unmap": true, 00:11:33.108 "flush": true, 00:11:33.108 "reset": true, 00:11:33.108 "nvme_admin": false, 00:11:33.108 "nvme_io": false, 00:11:33.108 "nvme_io_md": false, 00:11:33.108 "write_zeroes": true, 00:11:33.108 "zcopy": true, 00:11:33.108 "get_zone_info": false, 00:11:33.108 "zone_management": false, 00:11:33.108 "zone_append": false, 00:11:33.108 "compare": false, 00:11:33.108 "compare_and_write": false, 00:11:33.108 "abort": true, 00:11:33.108 "seek_hole": false, 00:11:33.108 "seek_data": false, 00:11:33.108 "copy": true, 00:11:33.108 "nvme_iov_md": false 00:11:33.108 }, 00:11:33.108 "memory_domains": [ 00:11:33.108 { 00:11:33.108 "dma_device_id": "system", 00:11:33.108 "dma_device_type": 1 00:11:33.108 }, 00:11:33.108 { 00:11:33.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.108 "dma_device_type": 2 00:11:33.108 } 00:11:33.108 ], 00:11:33.108 "driver_specific": {} 00:11:33.108 } 00:11:33.108 ] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.108 09:30:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.108 "name": "Existed_Raid", 00:11:33.108 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:33.108 "strip_size_kb": 64, 00:11:33.108 "state": "configuring", 00:11:33.108 "raid_level": "concat", 00:11:33.108 "superblock": true, 00:11:33.108 "num_base_bdevs": 4, 00:11:33.108 "num_base_bdevs_discovered": 3, 00:11:33.108 "num_base_bdevs_operational": 4, 00:11:33.108 "base_bdevs_list": [ 00:11:33.108 { 00:11:33.108 "name": "BaseBdev1", 00:11:33.108 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:33.108 "is_configured": true, 00:11:33.108 "data_offset": 2048, 00:11:33.108 "data_size": 63488 00:11:33.108 }, 00:11:33.108 { 
00:11:33.108 "name": null, 00:11:33.108 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:33.108 "is_configured": false, 00:11:33.108 "data_offset": 0, 00:11:33.108 "data_size": 63488 00:11:33.108 }, 00:11:33.108 { 00:11:33.108 "name": "BaseBdev3", 00:11:33.108 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:33.108 "is_configured": true, 00:11:33.108 "data_offset": 2048, 00:11:33.108 "data_size": 63488 00:11:33.108 }, 00:11:33.108 { 00:11:33.108 "name": "BaseBdev4", 00:11:33.108 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:33.108 "is_configured": true, 00:11:33.108 "data_offset": 2048, 00:11:33.108 "data_size": 63488 00:11:33.108 } 00:11:33.108 ] 00:11:33.108 }' 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.108 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.676 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.676 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.676 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.677 [2024-11-15 09:30:21.951911] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.677 09:30:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.677 09:30:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.677 "name": "Existed_Raid", 00:11:33.677 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:33.677 "strip_size_kb": 64, 00:11:33.677 "state": "configuring", 00:11:33.677 "raid_level": "concat", 00:11:33.677 "superblock": true, 00:11:33.677 "num_base_bdevs": 4, 00:11:33.677 "num_base_bdevs_discovered": 2, 00:11:33.677 "num_base_bdevs_operational": 4, 00:11:33.677 "base_bdevs_list": [ 00:11:33.677 { 00:11:33.677 "name": "BaseBdev1", 00:11:33.677 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:33.677 "is_configured": true, 00:11:33.677 "data_offset": 2048, 00:11:33.677 "data_size": 63488 00:11:33.677 }, 00:11:33.677 { 00:11:33.677 "name": null, 00:11:33.677 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:33.677 "is_configured": false, 00:11:33.677 "data_offset": 0, 00:11:33.677 "data_size": 63488 00:11:33.677 }, 00:11:33.677 { 00:11:33.677 "name": null, 00:11:33.677 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:33.677 "is_configured": false, 00:11:33.677 "data_offset": 0, 00:11:33.677 "data_size": 63488 00:11:33.677 }, 00:11:33.677 { 00:11:33.677 "name": "BaseBdev4", 00:11:33.677 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:33.677 "is_configured": true, 00:11:33.677 "data_offset": 2048, 00:11:33.677 "data_size": 63488 00:11:33.677 } 00:11:33.677 ] 00:11:33.677 }' 00:11:33.677 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.677 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.245 09:30:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.245 [2024-11-15 09:30:22.479041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.245 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.245 "name": "Existed_Raid", 00:11:34.245 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:34.245 "strip_size_kb": 64, 00:11:34.245 "state": "configuring", 00:11:34.245 "raid_level": "concat", 00:11:34.245 "superblock": true, 00:11:34.245 "num_base_bdevs": 4, 00:11:34.245 "num_base_bdevs_discovered": 3, 00:11:34.245 "num_base_bdevs_operational": 4, 00:11:34.245 "base_bdevs_list": [ 00:11:34.245 { 00:11:34.245 "name": "BaseBdev1", 00:11:34.245 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:34.245 "is_configured": true, 00:11:34.245 "data_offset": 2048, 00:11:34.245 "data_size": 63488 00:11:34.245 }, 00:11:34.245 { 00:11:34.245 "name": null, 00:11:34.245 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:34.245 "is_configured": false, 00:11:34.245 "data_offset": 0, 00:11:34.245 "data_size": 63488 00:11:34.245 }, 00:11:34.245 { 00:11:34.245 "name": "BaseBdev3", 00:11:34.245 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:34.245 "is_configured": true, 00:11:34.245 "data_offset": 2048, 00:11:34.245 "data_size": 63488 00:11:34.245 }, 00:11:34.245 { 00:11:34.245 "name": "BaseBdev4", 00:11:34.245 "uuid": 
"8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:34.245 "is_configured": true, 00:11:34.245 "data_offset": 2048, 00:11:34.246 "data_size": 63488 00:11:34.246 } 00:11:34.246 ] 00:11:34.246 }' 00:11:34.246 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.246 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.505 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:34.505 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.505 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.505 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.764 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.764 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:34.764 09:30:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.764 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.764 09:30:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.764 [2024-11-15 09:30:22.994189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.764 "name": "Existed_Raid", 00:11:34.764 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:34.764 "strip_size_kb": 64, 00:11:34.764 "state": "configuring", 00:11:34.764 "raid_level": "concat", 00:11:34.764 "superblock": true, 00:11:34.764 "num_base_bdevs": 4, 00:11:34.764 "num_base_bdevs_discovered": 2, 00:11:34.764 "num_base_bdevs_operational": 4, 00:11:34.764 "base_bdevs_list": [ 00:11:34.764 { 00:11:34.764 "name": null, 00:11:34.764 
"uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:34.764 "is_configured": false, 00:11:34.764 "data_offset": 0, 00:11:34.764 "data_size": 63488 00:11:34.764 }, 00:11:34.764 { 00:11:34.764 "name": null, 00:11:34.764 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:34.764 "is_configured": false, 00:11:34.764 "data_offset": 0, 00:11:34.764 "data_size": 63488 00:11:34.764 }, 00:11:34.764 { 00:11:34.764 "name": "BaseBdev3", 00:11:34.764 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:34.764 "is_configured": true, 00:11:34.764 "data_offset": 2048, 00:11:34.764 "data_size": 63488 00:11:34.764 }, 00:11:34.764 { 00:11:34.764 "name": "BaseBdev4", 00:11:34.764 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:34.764 "is_configured": true, 00:11:34.764 "data_offset": 2048, 00:11:34.764 "data_size": 63488 00:11:34.764 } 00:11:34.764 ] 00:11:34.764 }' 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.764 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.332 [2024-11-15 09:30:23.613900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.332 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.332 "name": "Existed_Raid", 00:11:35.332 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:35.332 "strip_size_kb": 64, 00:11:35.332 "state": "configuring", 00:11:35.332 "raid_level": "concat", 00:11:35.332 "superblock": true, 00:11:35.332 "num_base_bdevs": 4, 00:11:35.332 "num_base_bdevs_discovered": 3, 00:11:35.332 "num_base_bdevs_operational": 4, 00:11:35.332 "base_bdevs_list": [ 00:11:35.332 { 00:11:35.332 "name": null, 00:11:35.332 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:35.332 "is_configured": false, 00:11:35.332 "data_offset": 0, 00:11:35.332 "data_size": 63488 00:11:35.332 }, 00:11:35.332 { 00:11:35.333 "name": "BaseBdev2", 00:11:35.333 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:35.333 "is_configured": true, 00:11:35.333 "data_offset": 2048, 00:11:35.333 "data_size": 63488 00:11:35.333 }, 00:11:35.333 { 00:11:35.333 "name": "BaseBdev3", 00:11:35.333 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:35.333 "is_configured": true, 00:11:35.333 "data_offset": 2048, 00:11:35.333 "data_size": 63488 00:11:35.333 }, 00:11:35.333 { 00:11:35.333 "name": "BaseBdev4", 00:11:35.333 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:35.333 "is_configured": true, 00:11:35.333 "data_offset": 2048, 00:11:35.333 "data_size": 63488 00:11:35.333 } 00:11:35.333 ] 00:11:35.333 }' 00:11:35.333 09:30:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.333 09:30:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:35.897 09:30:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dbdcc0f6-7112-436d-811c-f803beff4a6e 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.897 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.897 [2024-11-15 09:30:24.198162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:35.897 [2024-11-15 09:30:24.198404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:35.898 [2024-11-15 09:30:24.198417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.898 [2024-11-15 09:30:24.198734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:35.898 [2024-11-15 09:30:24.198933] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:35.898 [2024-11-15 09:30:24.198957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:35.898 [2024-11-15 09:30:24.199103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.898 NewBaseBdev 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.898 09:30:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.898 [ 00:11:35.898 { 00:11:35.898 "name": "NewBaseBdev", 00:11:35.898 "aliases": [ 00:11:35.898 "dbdcc0f6-7112-436d-811c-f803beff4a6e" 00:11:35.898 ], 00:11:35.898 "product_name": "Malloc disk", 00:11:35.898 "block_size": 512, 00:11:35.898 "num_blocks": 65536, 00:11:35.898 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:35.898 "assigned_rate_limits": { 00:11:35.898 "rw_ios_per_sec": 0, 00:11:35.898 "rw_mbytes_per_sec": 0, 00:11:35.898 "r_mbytes_per_sec": 0, 00:11:35.898 "w_mbytes_per_sec": 0 00:11:35.898 }, 00:11:35.898 "claimed": true, 00:11:35.898 "claim_type": "exclusive_write", 00:11:35.898 "zoned": false, 00:11:35.898 "supported_io_types": { 00:11:35.898 "read": true, 00:11:35.898 "write": true, 00:11:35.898 "unmap": true, 00:11:35.898 "flush": true, 00:11:35.898 "reset": true, 00:11:35.898 "nvme_admin": false, 00:11:35.898 "nvme_io": false, 00:11:35.898 "nvme_io_md": false, 00:11:35.898 "write_zeroes": true, 00:11:35.898 "zcopy": true, 00:11:35.898 "get_zone_info": false, 00:11:35.898 "zone_management": false, 00:11:35.898 "zone_append": false, 00:11:35.898 "compare": false, 00:11:35.898 "compare_and_write": false, 00:11:35.898 "abort": true, 00:11:35.898 "seek_hole": false, 00:11:35.898 "seek_data": false, 00:11:35.898 "copy": true, 00:11:35.898 "nvme_iov_md": false 00:11:35.898 }, 00:11:35.898 "memory_domains": [ 00:11:35.898 { 00:11:35.898 "dma_device_id": "system", 00:11:35.898 "dma_device_type": 1 00:11:35.898 }, 00:11:35.898 { 00:11:35.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.898 "dma_device_type": 2 00:11:35.898 } 00:11:35.898 ], 00:11:35.898 "driver_specific": {} 00:11:35.898 } 00:11:35.898 ] 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:35.898 09:30:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.898 "name": "Existed_Raid", 00:11:35.898 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:35.898 "strip_size_kb": 64, 00:11:35.898 
"state": "online", 00:11:35.898 "raid_level": "concat", 00:11:35.898 "superblock": true, 00:11:35.898 "num_base_bdevs": 4, 00:11:35.898 "num_base_bdevs_discovered": 4, 00:11:35.898 "num_base_bdevs_operational": 4, 00:11:35.898 "base_bdevs_list": [ 00:11:35.898 { 00:11:35.898 "name": "NewBaseBdev", 00:11:35.898 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:35.898 "is_configured": true, 00:11:35.898 "data_offset": 2048, 00:11:35.898 "data_size": 63488 00:11:35.898 }, 00:11:35.898 { 00:11:35.898 "name": "BaseBdev2", 00:11:35.898 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:35.898 "is_configured": true, 00:11:35.898 "data_offset": 2048, 00:11:35.898 "data_size": 63488 00:11:35.898 }, 00:11:35.898 { 00:11:35.898 "name": "BaseBdev3", 00:11:35.898 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:35.898 "is_configured": true, 00:11:35.898 "data_offset": 2048, 00:11:35.898 "data_size": 63488 00:11:35.898 }, 00:11:35.898 { 00:11:35.898 "name": "BaseBdev4", 00:11:35.898 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:35.898 "is_configured": true, 00:11:35.898 "data_offset": 2048, 00:11:35.898 "data_size": 63488 00:11:35.898 } 00:11:35.898 ] 00:11:35.898 }' 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.898 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.464 
09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.464 [2024-11-15 09:30:24.685849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.464 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.464 "name": "Existed_Raid", 00:11:36.464 "aliases": [ 00:11:36.464 "472f4314-9bdc-4087-b7bb-b13ed55e3a8c" 00:11:36.464 ], 00:11:36.464 "product_name": "Raid Volume", 00:11:36.464 "block_size": 512, 00:11:36.464 "num_blocks": 253952, 00:11:36.464 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:36.464 "assigned_rate_limits": { 00:11:36.464 "rw_ios_per_sec": 0, 00:11:36.464 "rw_mbytes_per_sec": 0, 00:11:36.464 "r_mbytes_per_sec": 0, 00:11:36.464 "w_mbytes_per_sec": 0 00:11:36.464 }, 00:11:36.464 "claimed": false, 00:11:36.464 "zoned": false, 00:11:36.464 "supported_io_types": { 00:11:36.464 "read": true, 00:11:36.464 "write": true, 00:11:36.464 "unmap": true, 00:11:36.464 "flush": true, 00:11:36.464 "reset": true, 00:11:36.464 "nvme_admin": false, 00:11:36.464 "nvme_io": false, 00:11:36.464 "nvme_io_md": false, 00:11:36.464 "write_zeroes": true, 00:11:36.464 "zcopy": false, 00:11:36.464 "get_zone_info": false, 00:11:36.465 "zone_management": false, 00:11:36.465 "zone_append": false, 00:11:36.465 "compare": false, 00:11:36.465 "compare_and_write": false, 00:11:36.465 "abort": 
false, 00:11:36.465 "seek_hole": false, 00:11:36.465 "seek_data": false, 00:11:36.465 "copy": false, 00:11:36.465 "nvme_iov_md": false 00:11:36.465 }, 00:11:36.465 "memory_domains": [ 00:11:36.465 { 00:11:36.465 "dma_device_id": "system", 00:11:36.465 "dma_device_type": 1 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.465 "dma_device_type": 2 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "system", 00:11:36.465 "dma_device_type": 1 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.465 "dma_device_type": 2 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "system", 00:11:36.465 "dma_device_type": 1 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.465 "dma_device_type": 2 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "system", 00:11:36.465 "dma_device_type": 1 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.465 "dma_device_type": 2 00:11:36.465 } 00:11:36.465 ], 00:11:36.465 "driver_specific": { 00:11:36.465 "raid": { 00:11:36.465 "uuid": "472f4314-9bdc-4087-b7bb-b13ed55e3a8c", 00:11:36.465 "strip_size_kb": 64, 00:11:36.465 "state": "online", 00:11:36.465 "raid_level": "concat", 00:11:36.465 "superblock": true, 00:11:36.465 "num_base_bdevs": 4, 00:11:36.465 "num_base_bdevs_discovered": 4, 00:11:36.465 "num_base_bdevs_operational": 4, 00:11:36.465 "base_bdevs_list": [ 00:11:36.465 { 00:11:36.465 "name": "NewBaseBdev", 00:11:36.465 "uuid": "dbdcc0f6-7112-436d-811c-f803beff4a6e", 00:11:36.465 "is_configured": true, 00:11:36.465 "data_offset": 2048, 00:11:36.465 "data_size": 63488 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "name": "BaseBdev2", 00:11:36.465 "uuid": "6254fbd2-c934-4295-8956-76956c183402", 00:11:36.465 "is_configured": true, 00:11:36.465 "data_offset": 2048, 00:11:36.465 "data_size": 63488 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 
"name": "BaseBdev3", 00:11:36.465 "uuid": "6e0c68df-8d7a-46a3-8fb1-36bbf16aab06", 00:11:36.465 "is_configured": true, 00:11:36.465 "data_offset": 2048, 00:11:36.465 "data_size": 63488 00:11:36.465 }, 00:11:36.465 { 00:11:36.465 "name": "BaseBdev4", 00:11:36.465 "uuid": "8d7af84e-cc4f-413a-8914-5c712c3d1266", 00:11:36.465 "is_configured": true, 00:11:36.465 "data_offset": 2048, 00:11:36.465 "data_size": 63488 00:11:36.465 } 00:11:36.465 ] 00:11:36.465 } 00:11:36.465 } 00:11:36.465 }' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:36.465 BaseBdev2 00:11:36.465 BaseBdev3 00:11:36.465 BaseBdev4' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.465 09:30:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:36.465 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.723 [2024-11-15 09:30:24.980979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.723 [2024-11-15 09:30:24.981088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.723 [2024-11-15 09:30:24.981207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.723 [2024-11-15 09:30:24.981329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.723 [2024-11-15 09:30:24.981385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72328 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72328 ']' 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72328 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:36.723 09:30:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72328 00:11:36.723 09:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:36.723 09:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:36.723 09:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72328' 00:11:36.723 killing process with pid 72328 00:11:36.723 09:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72328 00:11:36.723 [2024-11-15 09:30:25.024468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.723 09:30:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72328 00:11:37.288 [2024-11-15 09:30:25.512557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.662 09:30:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:38.662 00:11:38.662 real 0m12.396s 00:11:38.662 user 0m19.296s 00:11:38.662 sys 0m2.297s 00:11:38.663 09:30:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:38.663 09:30:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.663 ************************************ 00:11:38.663 END TEST raid_state_function_test_sb 00:11:38.663 ************************************ 00:11:38.663 09:30:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:38.663 09:30:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:38.663 09:30:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:38.663 09:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.663 ************************************ 00:11:38.663 START TEST raid_superblock_test 00:11:38.663 ************************************ 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73004 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73004 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 73004 ']' 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:38.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:38.663 09:30:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.663 [2024-11-15 09:30:27.014500] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:38.663 [2024-11-15 09:30:27.014639] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73004 ] 00:11:38.921 [2024-11-15 09:30:27.197230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.921 [2024-11-15 09:30:27.332788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.178 [2024-11-15 09:30:27.577118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.178 [2024-11-15 09:30:27.577199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:39.743 
09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.743 malloc1 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.743 [2024-11-15 09:30:27.961091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.743 [2024-11-15 09:30:27.961172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.743 [2024-11-15 09:30:27.961198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:39.743 [2024-11-15 09:30:27.961209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.743 [2024-11-15 09:30:27.963453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.743 [2024-11-15 09:30:27.963493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.743 pt1 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.743 09:30:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.743 malloc2 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.743 [2024-11-15 09:30:28.021051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.743 [2024-11-15 09:30:28.021116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.743 [2024-11-15 09:30:28.021141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:39.743 [2024-11-15 09:30:28.021152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.743 [2024-11-15 09:30:28.023387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.743 [2024-11-15 09:30:28.023425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.743 
pt2 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.743 malloc3 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.743 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.743 [2024-11-15 09:30:28.086824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.743 [2024-11-15 09:30:28.086892] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.743 [2024-11-15 09:30:28.086912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:39.743 [2024-11-15 09:30:28.086922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.743 [2024-11-15 09:30:28.089121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.744 [2024-11-15 09:30:28.089162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.744 pt3 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.744 malloc4 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.744 [2024-11-15 09:30:28.140289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:39.744 [2024-11-15 09:30:28.140364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.744 [2024-11-15 09:30:28.140388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:39.744 [2024-11-15 09:30:28.140401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.744 [2024-11-15 09:30:28.143064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.744 [2024-11-15 09:30:28.143105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:39.744 pt4 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.744 [2024-11-15 09:30:28.148299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.744 [2024-11-15 
09:30:28.150349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.744 [2024-11-15 09:30:28.150421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:39.744 [2024-11-15 09:30:28.150488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:39.744 [2024-11-15 09:30:28.150697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:39.744 [2024-11-15 09:30:28.150718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:39.744 [2024-11-15 09:30:28.151014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:39.744 [2024-11-15 09:30:28.151227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:39.744 [2024-11-15 09:30:28.151251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:39.744 [2024-11-15 09:30:28.151438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.744 "name": "raid_bdev1", 00:11:39.744 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76", 00:11:39.744 "strip_size_kb": 64, 00:11:39.744 "state": "online", 00:11:39.744 "raid_level": "concat", 00:11:39.744 "superblock": true, 00:11:39.744 "num_base_bdevs": 4, 00:11:39.744 "num_base_bdevs_discovered": 4, 00:11:39.744 "num_base_bdevs_operational": 4, 00:11:39.744 "base_bdevs_list": [ 00:11:39.744 { 00:11:39.744 "name": "pt1", 00:11:39.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.744 "is_configured": true, 00:11:39.744 "data_offset": 2048, 00:11:39.744 "data_size": 63488 00:11:39.744 }, 00:11:39.744 { 00:11:39.744 "name": "pt2", 00:11:39.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.744 "is_configured": true, 00:11:39.744 "data_offset": 2048, 00:11:39.744 "data_size": 63488 00:11:39.744 }, 00:11:39.744 { 00:11:39.744 "name": "pt3", 00:11:39.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.744 "is_configured": true, 00:11:39.744 "data_offset": 2048, 00:11:39.744 
"data_size": 63488 00:11:39.744 }, 00:11:39.744 { 00:11:39.744 "name": "pt4", 00:11:39.744 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.744 "is_configured": true, 00:11:39.744 "data_offset": 2048, 00:11:39.744 "data_size": 63488 00:11:39.744 } 00:11:39.744 ] 00:11:39.744 }' 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.744 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.310 [2024-11-15 09:30:28.604207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.310 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.310 "name": "raid_bdev1", 00:11:40.310 "aliases": [ 00:11:40.310 "ad03fe53-08f6-441d-a032-5b57b23fdd76" 
00:11:40.310 ], 00:11:40.310 "product_name": "Raid Volume", 00:11:40.310 "block_size": 512, 00:11:40.310 "num_blocks": 253952, 00:11:40.310 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76", 00:11:40.310 "assigned_rate_limits": { 00:11:40.310 "rw_ios_per_sec": 0, 00:11:40.310 "rw_mbytes_per_sec": 0, 00:11:40.310 "r_mbytes_per_sec": 0, 00:11:40.310 "w_mbytes_per_sec": 0 00:11:40.310 }, 00:11:40.310 "claimed": false, 00:11:40.310 "zoned": false, 00:11:40.310 "supported_io_types": { 00:11:40.310 "read": true, 00:11:40.310 "write": true, 00:11:40.310 "unmap": true, 00:11:40.310 "flush": true, 00:11:40.310 "reset": true, 00:11:40.310 "nvme_admin": false, 00:11:40.310 "nvme_io": false, 00:11:40.310 "nvme_io_md": false, 00:11:40.310 "write_zeroes": true, 00:11:40.310 "zcopy": false, 00:11:40.310 "get_zone_info": false, 00:11:40.310 "zone_management": false, 00:11:40.310 "zone_append": false, 00:11:40.310 "compare": false, 00:11:40.310 "compare_and_write": false, 00:11:40.310 "abort": false, 00:11:40.310 "seek_hole": false, 00:11:40.310 "seek_data": false, 00:11:40.310 "copy": false, 00:11:40.310 "nvme_iov_md": false 00:11:40.310 }, 00:11:40.310 "memory_domains": [ 00:11:40.310 { 00:11:40.310 "dma_device_id": "system", 00:11:40.310 "dma_device_type": 1 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.310 "dma_device_type": 2 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "dma_device_id": "system", 00:11:40.310 "dma_device_type": 1 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.310 "dma_device_type": 2 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "dma_device_id": "system", 00:11:40.310 "dma_device_type": 1 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.310 "dma_device_type": 2 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "dma_device_id": "system", 00:11:40.310 "dma_device_type": 1 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:40.310 "dma_device_type": 2 00:11:40.310 } 00:11:40.310 ], 00:11:40.310 "driver_specific": { 00:11:40.310 "raid": { 00:11:40.310 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76", 00:11:40.310 "strip_size_kb": 64, 00:11:40.310 "state": "online", 00:11:40.310 "raid_level": "concat", 00:11:40.310 "superblock": true, 00:11:40.310 "num_base_bdevs": 4, 00:11:40.310 "num_base_bdevs_discovered": 4, 00:11:40.310 "num_base_bdevs_operational": 4, 00:11:40.310 "base_bdevs_list": [ 00:11:40.310 { 00:11:40.310 "name": "pt1", 00:11:40.310 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.310 "is_configured": true, 00:11:40.310 "data_offset": 2048, 00:11:40.310 "data_size": 63488 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "name": "pt2", 00:11:40.310 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.310 "is_configured": true, 00:11:40.310 "data_offset": 2048, 00:11:40.310 "data_size": 63488 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "name": "pt3", 00:11:40.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.310 "is_configured": true, 00:11:40.310 "data_offset": 2048, 00:11:40.310 "data_size": 63488 00:11:40.310 }, 00:11:40.310 { 00:11:40.310 "name": "pt4", 00:11:40.311 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.311 "is_configured": true, 00:11:40.311 "data_offset": 2048, 00:11:40.311 "data_size": 63488 00:11:40.311 } 00:11:40.311 ] 00:11:40.311 } 00:11:40.311 } 00:11:40.311 }' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:40.311 pt2 00:11:40.311 pt3 00:11:40.311 pt4' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.311 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.569 09:30:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.569 [2024-11-15 09:30:28.907638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ad03fe53-08f6-441d-a032-5b57b23fdd76
00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ad03fe53-08f6-441d-a032-5b57b23fdd76 ']'
00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.569 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.569 [2024-11-15 09:30:28.951165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:40.569 [2024-11-15 09:30:28.951212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:40.570 [2024-11-15 09:30:28.951352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:40.570 [2024-11-15 09:30:28.951465] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:40.570 [2024-11-15 09:30:28.951492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:11:40.570 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.570 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.570 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.570 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.570 09:30:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:40.570 09:30:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.570 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.829 [2024-11-15 09:30:29.114948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:40.829 [2024-11-15 09:30:29.117455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:40.829 [2024-11-15 09:30:29.117522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:40.829 [2024-11-15 09:30:29.117564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:11:40.829 [2024-11-15 09:30:29.117627] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:40.829 [2024-11-15 09:30:29.117724] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:40.829 [2024-11-15 09:30:29.117758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:40.829 [2024-11-15 09:30:29.117788] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:11:40.829 [2024-11-15 09:30:29.117809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:40.829 [2024-11-15 09:30:29.117827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:11:40.829 request:
00:11:40.829 {
00:11:40.829 "name": "raid_bdev1",
00:11:40.829 "raid_level": "concat",
00:11:40.829 "base_bdevs": [
00:11:40.829 "malloc1",
00:11:40.829 "malloc2",
00:11:40.829 "malloc3",
00:11:40.829 "malloc4"
00:11:40.829 ],
00:11:40.829 "strip_size_kb": 64,
00:11:40.829 "superblock": false,
00:11:40.829 "method": "bdev_raid_create",
00:11:40.829 "req_id": 1
00:11:40.829 }
00:11:40.829 Got JSON-RPC error response
00:11:40.829 response:
00:11:40.829 {
00:11:40.829 "code": -17,
00:11:40.829 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:40.829 }
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.829 [2024-11-15 09:30:29.174799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:40.829 [2024-11-15 09:30:29.174913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:40.829 [2024-11-15 09:30:29.174937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:40.829 [2024-11-15 09:30:29.174951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:40.829 [2024-11-15 09:30:29.177957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:40.829 [2024-11-15 09:30:29.178037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:40.829 [2024-11-15 09:30:29.178168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:40.829 [2024-11-15 09:30:29.178268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:11:40.829 pt1
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:40.829 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:40.830 "name": "raid_bdev1",
00:11:40.830 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76",
00:11:40.830 "strip_size_kb": 64,
00:11:40.830 "state": "configuring",
00:11:40.830 "raid_level": "concat",
00:11:40.830 "superblock": true,
00:11:40.830 "num_base_bdevs": 4,
00:11:40.830 "num_base_bdevs_discovered": 1,
00:11:40.830 "num_base_bdevs_operational": 4,
00:11:40.830 "base_bdevs_list": [
00:11:40.830 {
00:11:40.830 "name": "pt1",
00:11:40.830 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:40.830 "is_configured": true,
00:11:40.830 "data_offset": 2048,
00:11:40.830 "data_size": 63488
00:11:40.830 },
00:11:40.830 {
00:11:40.830 "name": null,
00:11:40.830 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:40.830 "is_configured": false,
00:11:40.830 "data_offset": 2048,
00:11:40.830 "data_size": 63488
00:11:40.830 },
00:11:40.830 {
00:11:40.830 "name": null,
00:11:40.830 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:40.830 "is_configured": false,
00:11:40.830 "data_offset": 2048,
00:11:40.830 "data_size": 63488
00:11:40.830 },
00:11:40.830 {
00:11:40.830 "name": null,
00:11:40.830 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:40.830 "is_configured": false,
00:11:40.830 "data_offset": 2048,
00:11:40.830 "data_size": 63488
00:11:40.830 }
00:11:40.830 ]
00:11:40.830 }'
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:40.830 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.396 [2024-11-15 09:30:29.622059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:41.396 [2024-11-15 09:30:29.622160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:41.396 [2024-11-15 09:30:29.622185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:11:41.396 [2024-11-15 09:30:29.622200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:41.396 [2024-11-15 09:30:29.622790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:41.396 [2024-11-15 09:30:29.622825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:41.396 [2024-11-15 09:30:29.622950] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:41.396 [2024-11-15 09:30:29.623004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:41.396 pt2
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.396 [2024-11-15 09:30:29.630022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:41.396 "name": "raid_bdev1",
00:11:41.396 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76",
00:11:41.396 "strip_size_kb": 64,
00:11:41.396 "state": "configuring",
00:11:41.396 "raid_level": "concat",
00:11:41.396 "superblock": true,
00:11:41.396 "num_base_bdevs": 4,
00:11:41.396 "num_base_bdevs_discovered": 1,
00:11:41.396 "num_base_bdevs_operational": 4,
00:11:41.396 "base_bdevs_list": [
00:11:41.396 {
00:11:41.396 "name": "pt1",
00:11:41.396 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:41.396 "is_configured": true,
00:11:41.396 "data_offset": 2048,
00:11:41.396 "data_size": 63488
00:11:41.396 },
00:11:41.396 {
00:11:41.396 "name": null,
00:11:41.396 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:41.396 "is_configured": false,
00:11:41.396 "data_offset": 0,
00:11:41.396 "data_size": 63488
00:11:41.396 },
00:11:41.396 {
00:11:41.396 "name": null,
00:11:41.396 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:41.396 "is_configured": false,
00:11:41.396 "data_offset": 2048,
00:11:41.396 "data_size": 63488
00:11:41.396 },
00:11:41.396 {
00:11:41.396 "name": null,
00:11:41.396 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:41.396 "is_configured": false,
00:11:41.396 "data_offset": 2048,
00:11:41.396 "data_size": 63488
00:11:41.396 }
00:11:41.396 ]
00:11:41.396 }'
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:41.396 09:30:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.655 [2024-11-15 09:30:30.075977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:41.655 [2024-11-15 09:30:30.076135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:41.655 [2024-11-15 09:30:30.076173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:41.655 [2024-11-15 09:30:30.076197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:41.655 [2024-11-15 09:30:30.077132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:41.655 [2024-11-15 09:30:30.077169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:41.655 [2024-11-15 09:30:30.077378] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:41.655 [2024-11-15 09:30:30.077441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:11:41.655 pt2
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.655 [2024-11-15 09:30:30.087745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:41.655 [2024-11-15 09:30:30.087816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:41.655 [2024-11-15 09:30:30.087875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:11:41.655 [2024-11-15 09:30:30.087890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:41.655 [2024-11-15 09:30:30.088476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:41.655 [2024-11-15 09:30:30.088506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:41.655 [2024-11-15 09:30:30.088616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:41.655 [2024-11-15 09:30:30.088652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:11:41.655 pt3
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.655 [2024-11-15 09:30:30.099685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:41.655 [2024-11-15 09:30:30.099767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:41.655 [2024-11-15 09:30:30.099793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:11:41.655 [2024-11-15 09:30:30.099804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:41.655 [2024-11-15 09:30:30.100388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:41.655 [2024-11-15 09:30:30.100420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:11:41.655 [2024-11-15 09:30:30.100522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:11:41.655 [2024-11-15 09:30:30.100559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:11:41.655 [2024-11-15 09:30:30.100808] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:41.655 [2024-11-15 09:30:30.100830] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:11:41.655 [2024-11-15 09:30:30.101158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:11:41.655 [2024-11-15 09:30:30.101361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:41.655 [2024-11-15 09:30:30.101387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:11:41.655 [2024-11-15 09:30:30.101603] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:41.655 pt4
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:41.655 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:41.914 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:41.914 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:41.914 "name": "raid_bdev1",
00:11:41.914 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76",
00:11:41.914 "strip_size_kb": 64,
00:11:41.914 "state": "online",
00:11:41.914 "raid_level": "concat",
00:11:41.914 "superblock": true,
00:11:41.914 "num_base_bdevs": 4,
00:11:41.914 "num_base_bdevs_discovered": 4,
00:11:41.914 "num_base_bdevs_operational": 4,
00:11:41.914 "base_bdevs_list": [
00:11:41.914 {
00:11:41.914 "name": "pt1",
00:11:41.914 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:41.914 "is_configured": true,
00:11:41.914 "data_offset": 2048,
00:11:41.914 "data_size": 63488
00:11:41.914 },
00:11:41.914 {
00:11:41.914 "name": "pt2",
00:11:41.914 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:41.914 "is_configured": true,
00:11:41.914 "data_offset": 2048,
00:11:41.914 "data_size": 63488
00:11:41.914 },
00:11:41.914 {
00:11:41.914 "name": "pt3",
00:11:41.914 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:41.914 "is_configured": true,
00:11:41.914 "data_offset": 2048,
00:11:41.914 "data_size": 63488
00:11:41.914 },
00:11:41.914 {
00:11:41.914 "name": "pt4",
00:11:41.914 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:41.914 "is_configured": true,
00:11:41.914 "data_offset": 2048,
00:11:41.914 "data_size": 63488
00:11:41.914 }
00:11:41.914 ]
00:11:41.914 }'
00:11:41.914 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:41.914 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:42.172 [2024-11-15 09:30:30.543408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.172 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:42.172 "name": "raid_bdev1",
00:11:42.172 "aliases": [
00:11:42.172 "ad03fe53-08f6-441d-a032-5b57b23fdd76"
00:11:42.172 ],
00:11:42.172 "product_name": "Raid Volume",
00:11:42.172 "block_size": 512,
00:11:42.172 "num_blocks": 253952,
00:11:42.172 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76",
00:11:42.172 "assigned_rate_limits": {
00:11:42.172 "rw_ios_per_sec": 0,
00:11:42.172 "rw_mbytes_per_sec": 0,
00:11:42.172 "r_mbytes_per_sec": 0,
00:11:42.172 "w_mbytes_per_sec": 0
00:11:42.172 },
00:11:42.172 "claimed": false,
00:11:42.172 "zoned": false,
00:11:42.172 "supported_io_types": {
00:11:42.172 "read": true,
00:11:42.172 "write": true,
00:11:42.172 "unmap": true,
00:11:42.172 "flush": true,
00:11:42.172 "reset": true,
00:11:42.172 "nvme_admin": false,
00:11:42.172 "nvme_io": false,
00:11:42.172 "nvme_io_md": false,
00:11:42.172 "write_zeroes": true,
00:11:42.172 "zcopy": false,
00:11:42.172 "get_zone_info": false,
00:11:42.172 "zone_management": false,
00:11:42.172 "zone_append": false,
00:11:42.172 "compare": false,
00:11:42.172 "compare_and_write": false,
00:11:42.172 "abort": false,
00:11:42.172 "seek_hole": false,
00:11:42.172 "seek_data": false,
00:11:42.172 "copy": false,
00:11:42.172 "nvme_iov_md": false
00:11:42.172 },
00:11:42.172 "memory_domains": [
00:11:42.172 {
00:11:42.172 "dma_device_id": "system",
00:11:42.172 "dma_device_type": 1
00:11:42.172 },
00:11:42.172 {
00:11:42.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.172 "dma_device_type": 2
00:11:42.172 },
00:11:42.172 {
00:11:42.172 "dma_device_id": "system",
00:11:42.172 "dma_device_type": 1
00:11:42.172 },
00:11:42.172 {
00:11:42.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.172 "dma_device_type": 2
00:11:42.172 },
00:11:42.172 {
00:11:42.172 "dma_device_id": "system",
00:11:42.172 "dma_device_type": 1
00:11:42.172 },
00:11:42.172 {
00:11:42.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.172 "dma_device_type": 2
00:11:42.172 },
00:11:42.172 {
00:11:42.172 "dma_device_id": "system",
00:11:42.172 "dma_device_type": 1
00:11:42.172 },
00:11:42.172 {
00:11:42.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:42.172 "dma_device_type": 2
00:11:42.172 }
00:11:42.172 ],
00:11:42.172 "driver_specific": {
00:11:42.172 "raid": {
00:11:42.172 "uuid": "ad03fe53-08f6-441d-a032-5b57b23fdd76",
00:11:42.172 "strip_size_kb": 64,
00:11:42.172 "state": "online",
00:11:42.172 "raid_level": "concat",
00:11:42.172 "superblock": true,
00:11:42.172 "num_base_bdevs": 4,
00:11:42.172 "num_base_bdevs_discovered": 4,
00:11:42.172 "num_base_bdevs_operational": 4,
00:11:42.172 "base_bdevs_list": [
00:11:42.172 {
00:11:42.172 "name": "pt1",
00:11:42.172 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:42.173 "is_configured": true,
00:11:42.173 "data_offset": 2048,
00:11:42.173 "data_size": 63488
00:11:42.173 },
00:11:42.173 {
00:11:42.173 "name": "pt2",
00:11:42.173 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:42.173 "is_configured": true,
00:11:42.173 "data_offset": 2048,
00:11:42.173 "data_size": 63488
00:11:42.173 },
00:11:42.173 {
00:11:42.173 "name": "pt3",
00:11:42.173 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:42.173 "is_configured": true,
00:11:42.173 "data_offset": 2048,
00:11:42.173 "data_size": 63488
00:11:42.173 },
00:11:42.173 {
00:11:42.173 "name": "pt4",
00:11:42.173 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:42.173 "is_configured": true,
00:11:42.173 "data_offset": 2048,
00:11:42.173 "data_size": 63488
00:11:42.173 }
00:11:42.173 ]
00:11:42.173 }
00:11:42.173 }
00:11:42.173 }'
00:11:42.173 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:42.173 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:42.173 pt2
00:11:42.173 pt3
00:11:42.173 pt4'
00:11:42.173 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:42.431 [2024-11-15 09:30:30.858780] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ad03fe53-08f6-441d-a032-5b57b23fdd76 '!=' ad03fe53-08f6-441d-a032-5b57b23fdd76 ']'
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73004
00:11:42.431 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 73004 ']'
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 73004
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73004
killing process with pid 73004
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73004'
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 73004
00:11:42.689 09:30:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 73004
00:11:42.689 [2024-11-15 09:30:30.925950] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:42.689 [2024-11-15 09:30:30.926132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:42.689 [2024-11-15 09:30:30.926261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:42.689 [2024-11-15 09:30:30.926277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:11:43.256 [2024-11-15 09:30:31.415221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:44.633 09:30:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:11:44.633 00:11:44.633 real 0m5.841s
00:11:44.633 user 0m8.182s
00:11:44.633 sys 0m0.993s
00:11:44.633 09:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:44.633 09:30:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:44.633 ************************************
00:11:44.633 END TEST raid_superblock_test
00:11:44.633 ************************************ 00:11:44.633 09:30:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:44.633 09:30:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:44.633 09:30:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:44.633 09:30:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.633 ************************************ 00:11:44.633 START TEST raid_read_error_test 00:11:44.633 ************************************ 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g58IXTssZb 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73268 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73268 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73268 ']' 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:44.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:44.633 09:30:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.633 [2024-11-15 09:30:32.948297] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:44.633 [2024-11-15 09:30:32.948453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73268 ] 00:11:44.892 [2024-11-15 09:30:33.131665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.892 [2024-11-15 09:30:33.289250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.151 [2024-11-15 09:30:33.547646] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.151 [2024-11-15 09:30:33.547722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.719 BaseBdev1_malloc 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.719 true 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.719 [2024-11-15 09:30:33.952892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.719 [2024-11-15 09:30:33.952969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.719 [2024-11-15 09:30:33.952993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.719 [2024-11-15 09:30:33.953006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.719 [2024-11-15 09:30:33.955534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.719 [2024-11-15 09:30:33.955584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.719 BaseBdev1 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.719 09:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.720 09:30:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.720 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 BaseBdev2_malloc 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 true 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 [2024-11-15 09:30:34.026738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.720 [2024-11-15 09:30:34.026807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.720 [2024-11-15 09:30:34.026825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.720 [2024-11-15 09:30:34.026837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.720 [2024-11-15 09:30:34.029292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.720 [2024-11-15 09:30:34.029338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.720 BaseBdev2 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 BaseBdev3_malloc 00:11:45.720 09:30:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 true 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 [2024-11-15 09:30:34.108141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.720 [2024-11-15 09:30:34.108204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.720 [2024-11-15 09:30:34.108224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.720 [2024-11-15 09:30:34.108236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.720 [2024-11-15 09:30:34.110519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.720 [2024-11-15 09:30:34.110561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.720 BaseBdev3 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 BaseBdev4_malloc 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 true 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.720 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.720 [2024-11-15 09:30:34.183724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:45.720 [2024-11-15 09:30:34.183794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.720 [2024-11-15 09:30:34.183815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:45.720 [2024-11-15 09:30:34.183827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.980 [2024-11-15 09:30:34.186275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.980 [2024-11-15 09:30:34.186326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:45.980 BaseBdev4 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.980 [2024-11-15 09:30:34.195777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.980 [2024-11-15 09:30:34.197810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.980 [2024-11-15 09:30:34.197908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.980 [2024-11-15 09:30:34.197982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.980 [2024-11-15 09:30:34.198254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:45.980 [2024-11-15 09:30:34.198278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:45.980 [2024-11-15 09:30:34.198547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:45.980 [2024-11-15 09:30:34.198735] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:45.980 [2024-11-15 09:30:34.198755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:45.980 [2024-11-15 09:30:34.198942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.980 09:30:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.980 "name": "raid_bdev1", 00:11:45.980 "uuid": "853c4322-2cf2-4b8d-b023-1e7ed2f38342", 00:11:45.980 "strip_size_kb": 64, 00:11:45.980 "state": "online", 00:11:45.980 "raid_level": "concat", 00:11:45.980 "superblock": true, 00:11:45.980 "num_base_bdevs": 4, 00:11:45.980 "num_base_bdevs_discovered": 4, 00:11:45.980 "num_base_bdevs_operational": 4, 00:11:45.980 "base_bdevs_list": [ 
00:11:45.980 { 00:11:45.980 "name": "BaseBdev1", 00:11:45.980 "uuid": "eceaadc8-c258-59b8-b876-d0490068e86b", 00:11:45.980 "is_configured": true, 00:11:45.980 "data_offset": 2048, 00:11:45.980 "data_size": 63488 00:11:45.980 }, 00:11:45.980 { 00:11:45.980 "name": "BaseBdev2", 00:11:45.980 "uuid": "f518c028-91b9-58f7-938e-3cce2a0630d5", 00:11:45.980 "is_configured": true, 00:11:45.980 "data_offset": 2048, 00:11:45.980 "data_size": 63488 00:11:45.980 }, 00:11:45.980 { 00:11:45.980 "name": "BaseBdev3", 00:11:45.980 "uuid": "0fb98588-b108-5971-980c-b0dfa2f61853", 00:11:45.980 "is_configured": true, 00:11:45.980 "data_offset": 2048, 00:11:45.980 "data_size": 63488 00:11:45.980 }, 00:11:45.980 { 00:11:45.980 "name": "BaseBdev4", 00:11:45.980 "uuid": "e97d7ce7-405a-56fa-bff4-7292928b35ab", 00:11:45.980 "is_configured": true, 00:11:45.980 "data_offset": 2048, 00:11:45.980 "data_size": 63488 00:11:45.980 } 00:11:45.980 ] 00:11:45.980 }' 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.980 09:30:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.238 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:46.238 09:30:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:46.498 [2024-11-15 09:30:34.744523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.436 09:30:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.436 09:30:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.436 "name": "raid_bdev1", 00:11:47.436 "uuid": "853c4322-2cf2-4b8d-b023-1e7ed2f38342", 00:11:47.436 "strip_size_kb": 64, 00:11:47.436 "state": "online", 00:11:47.436 "raid_level": "concat", 00:11:47.436 "superblock": true, 00:11:47.436 "num_base_bdevs": 4, 00:11:47.436 "num_base_bdevs_discovered": 4, 00:11:47.436 "num_base_bdevs_operational": 4, 00:11:47.436 "base_bdevs_list": [ 00:11:47.436 { 00:11:47.436 "name": "BaseBdev1", 00:11:47.436 "uuid": "eceaadc8-c258-59b8-b876-d0490068e86b", 00:11:47.436 "is_configured": true, 00:11:47.436 "data_offset": 2048, 00:11:47.436 "data_size": 63488 00:11:47.436 }, 00:11:47.436 { 00:11:47.436 "name": "BaseBdev2", 00:11:47.436 "uuid": "f518c028-91b9-58f7-938e-3cce2a0630d5", 00:11:47.436 "is_configured": true, 00:11:47.436 "data_offset": 2048, 00:11:47.436 "data_size": 63488 00:11:47.436 }, 00:11:47.436 { 00:11:47.436 "name": "BaseBdev3", 00:11:47.436 "uuid": "0fb98588-b108-5971-980c-b0dfa2f61853", 00:11:47.436 "is_configured": true, 00:11:47.436 "data_offset": 2048, 00:11:47.436 "data_size": 63488 00:11:47.436 }, 00:11:47.436 { 00:11:47.436 "name": "BaseBdev4", 00:11:47.436 "uuid": "e97d7ce7-405a-56fa-bff4-7292928b35ab", 00:11:47.436 "is_configured": true, 00:11:47.436 "data_offset": 2048, 00:11:47.436 "data_size": 63488 00:11:47.436 } 00:11:47.436 ] 00:11:47.436 }' 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.436 09:30:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.696 [2024-11-15 09:30:36.123812] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.696 [2024-11-15 09:30:36.123886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.696 [2024-11-15 09:30:36.127161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.696 [2024-11-15 09:30:36.127239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.696 [2024-11-15 09:30:36.127292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.696 [2024-11-15 09:30:36.127309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:47.696 { 00:11:47.696 "results": [ 00:11:47.696 { 00:11:47.696 "job": "raid_bdev1", 00:11:47.696 "core_mask": "0x1", 00:11:47.696 "workload": "randrw", 00:11:47.696 "percentage": 50, 00:11:47.696 "status": "finished", 00:11:47.696 "queue_depth": 1, 00:11:47.696 "io_size": 131072, 00:11:47.696 "runtime": 1.379847, 00:11:47.696 "iops": 12091.195618064901, 00:11:47.696 "mibps": 1511.3994522581127, 00:11:47.696 "io_failed": 1, 00:11:47.696 "io_timeout": 0, 00:11:47.696 "avg_latency_us": 114.45236447767718, 00:11:47.696 "min_latency_us": 31.972052401746726, 00:11:47.696 "max_latency_us": 1652.709170305677 00:11:47.696 } 00:11:47.696 ], 00:11:47.696 "core_count": 1 00:11:47.696 } 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73268 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73268 ']' 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73268 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:47.696 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73268 00:11:47.956 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:47.956 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:47.956 killing process with pid 73268 00:11:47.956 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73268' 00:11:47.956 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73268 00:11:47.956 [2024-11-15 09:30:36.171703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.956 09:30:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73268 00:11:48.216 [2024-11-15 09:30:36.567454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g58IXTssZb 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:49.594 00:11:49.594 real 0m5.225s 00:11:49.594 user 0m6.093s 00:11:49.594 sys 0m0.688s 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:49.594 09:30:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.594 ************************************ 00:11:49.594 END TEST raid_read_error_test 00:11:49.594 ************************************ 00:11:49.853 09:30:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:49.853 09:30:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:49.853 09:30:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.853 09:30:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.853 ************************************ 00:11:49.853 START TEST raid_write_error_test 00:11:49.853 ************************************ 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:49.853 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cd0nQMyMfg 00:11:49.854 09:30:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73420 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73420 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73420 ']' 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:49.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.854 09:30:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.854 [2024-11-15 09:30:38.236772] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:49.854 [2024-11-15 09:30:38.236949] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73420 ] 00:11:50.139 [2024-11-15 09:30:38.403982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.139 [2024-11-15 09:30:38.538749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.398 [2024-11-15 09:30:38.773240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.398 [2024-11-15 09:30:38.773313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 BaseBdev1_malloc 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 true 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 [2024-11-15 09:30:39.233746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.965 [2024-11-15 09:30:39.233821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.965 [2024-11-15 09:30:39.233880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:50.965 [2024-11-15 09:30:39.233898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.965 [2024-11-15 09:30:39.236799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.965 [2024-11-15 09:30:39.236862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.965 BaseBdev1 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 BaseBdev2_malloc 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:50.965 09:30:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 true 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 [2024-11-15 09:30:39.310364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:50.965 [2024-11-15 09:30:39.310446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.965 [2024-11-15 09:30:39.310466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:50.965 [2024-11-15 09:30:39.310478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.965 [2024-11-15 09:30:39.313063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.965 [2024-11-15 09:30:39.313109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.965 BaseBdev2 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:50.965 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:50.966 BaseBdev3_malloc 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.966 true 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.966 [2024-11-15 09:30:39.396744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:50.966 [2024-11-15 09:30:39.396811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.966 [2024-11-15 09:30:39.396833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:50.966 [2024-11-15 09:30:39.396857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.966 [2024-11-15 09:30:39.399675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.966 [2024-11-15 09:30:39.399721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:50.966 BaseBdev3 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.966 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.224 BaseBdev4_malloc 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.224 true 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.224 [2024-11-15 09:30:39.476363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:51.224 [2024-11-15 09:30:39.476429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.224 [2024-11-15 09:30:39.476451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:51.224 [2024-11-15 09:30:39.476465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.224 [2024-11-15 09:30:39.479094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.224 [2024-11-15 09:30:39.479140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:51.224 BaseBdev4 
00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.224 [2024-11-15 09:30:39.488419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.224 [2024-11-15 09:30:39.490689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.224 [2024-11-15 09:30:39.490775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.224 [2024-11-15 09:30:39.490860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.224 [2024-11-15 09:30:39.491120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:51.224 [2024-11-15 09:30:39.491161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:51.224 [2024-11-15 09:30:39.491460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:51.224 [2024-11-15 09:30:39.491678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:51.224 [2024-11-15 09:30:39.491698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:51.224 [2024-11-15 09:30:39.491913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.224 "name": "raid_bdev1", 00:11:51.224 "uuid": "0f869965-60c8-4b7c-938d-b793ac87532e", 00:11:51.224 "strip_size_kb": 64, 00:11:51.224 "state": "online", 00:11:51.224 "raid_level": "concat", 00:11:51.224 "superblock": true, 00:11:51.224 "num_base_bdevs": 4, 00:11:51.224 "num_base_bdevs_discovered": 4, 00:11:51.224 
"num_base_bdevs_operational": 4, 00:11:51.224 "base_bdevs_list": [ 00:11:51.224 { 00:11:51.224 "name": "BaseBdev1", 00:11:51.224 "uuid": "d3a79cf1-7ba1-5a06-a06d-eec9e5ceb288", 00:11:51.224 "is_configured": true, 00:11:51.224 "data_offset": 2048, 00:11:51.224 "data_size": 63488 00:11:51.224 }, 00:11:51.224 { 00:11:51.224 "name": "BaseBdev2", 00:11:51.224 "uuid": "41151a32-d09e-57de-aeba-15f547d192a0", 00:11:51.224 "is_configured": true, 00:11:51.224 "data_offset": 2048, 00:11:51.224 "data_size": 63488 00:11:51.224 }, 00:11:51.224 { 00:11:51.224 "name": "BaseBdev3", 00:11:51.224 "uuid": "89afc62e-38c2-55a9-b9db-953dc5bbc38e", 00:11:51.224 "is_configured": true, 00:11:51.224 "data_offset": 2048, 00:11:51.224 "data_size": 63488 00:11:51.224 }, 00:11:51.224 { 00:11:51.224 "name": "BaseBdev4", 00:11:51.224 "uuid": "61c33f0e-21dd-5cc0-9d94-5c6730ee72e5", 00:11:51.224 "is_configured": true, 00:11:51.224 "data_offset": 2048, 00:11:51.224 "data_size": 63488 00:11:51.224 } 00:11:51.224 ] 00:11:51.224 }' 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.224 09:30:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.483 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.483 09:30:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.740 [2024-11-15 09:30:40.025117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.676 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.677 09:30:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.677 "name": "raid_bdev1", 00:11:52.677 "uuid": "0f869965-60c8-4b7c-938d-b793ac87532e", 00:11:52.677 "strip_size_kb": 64, 00:11:52.677 "state": "online", 00:11:52.677 "raid_level": "concat", 00:11:52.677 "superblock": true, 00:11:52.677 "num_base_bdevs": 4, 00:11:52.677 "num_base_bdevs_discovered": 4, 00:11:52.677 "num_base_bdevs_operational": 4, 00:11:52.677 "base_bdevs_list": [ 00:11:52.677 { 00:11:52.677 "name": "BaseBdev1", 00:11:52.677 "uuid": "d3a79cf1-7ba1-5a06-a06d-eec9e5ceb288", 00:11:52.677 "is_configured": true, 00:11:52.677 "data_offset": 2048, 00:11:52.677 "data_size": 63488 00:11:52.677 }, 00:11:52.677 { 00:11:52.677 "name": "BaseBdev2", 00:11:52.677 "uuid": "41151a32-d09e-57de-aeba-15f547d192a0", 00:11:52.677 "is_configured": true, 00:11:52.677 "data_offset": 2048, 00:11:52.677 "data_size": 63488 00:11:52.677 }, 00:11:52.677 { 00:11:52.677 "name": "BaseBdev3", 00:11:52.677 "uuid": "89afc62e-38c2-55a9-b9db-953dc5bbc38e", 00:11:52.677 "is_configured": true, 00:11:52.677 "data_offset": 2048, 00:11:52.677 "data_size": 63488 00:11:52.677 }, 00:11:52.677 { 00:11:52.677 "name": "BaseBdev4", 00:11:52.677 "uuid": "61c33f0e-21dd-5cc0-9d94-5c6730ee72e5", 00:11:52.677 "is_configured": true, 00:11:52.677 "data_offset": 2048, 00:11:52.677 "data_size": 63488 00:11:52.677 } 00:11:52.677 ] 00:11:52.677 }' 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.677 09:30:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.935 [2024-11-15 09:30:41.382594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.935 [2024-11-15 09:30:41.382641] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.935 [2024-11-15 09:30:41.385726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.935 [2024-11-15 09:30:41.385803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.935 [2024-11-15 09:30:41.385869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.935 [2024-11-15 09:30:41.385888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:52.935 { 00:11:52.935 "results": [ 00:11:52.935 { 00:11:52.935 "job": "raid_bdev1", 00:11:52.935 "core_mask": "0x1", 00:11:52.935 "workload": "randrw", 00:11:52.935 "percentage": 50, 00:11:52.935 "status": "finished", 00:11:52.935 "queue_depth": 1, 00:11:52.935 "io_size": 131072, 00:11:52.935 "runtime": 1.357731, 00:11:52.935 "iops": 12475.961733215196, 00:11:52.935 "mibps": 1559.4952166518995, 00:11:52.935 "io_failed": 1, 00:11:52.935 "io_timeout": 0, 00:11:52.935 "avg_latency_us": 112.9406528049164, 00:11:52.935 "min_latency_us": 28.05938864628821, 00:11:52.935 "max_latency_us": 1488.1537117903931 00:11:52.935 } 00:11:52.935 ], 00:11:52.935 "core_count": 1 00:11:52.935 } 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73420 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73420 ']' 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73420 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:52.935 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73420 00:11:53.193 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:53.193 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:53.193 killing process with pid 73420 00:11:53.193 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73420' 00:11:53.194 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73420 00:11:53.194 [2024-11-15 09:30:41.418219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.194 09:30:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73420 00:11:53.453 [2024-11-15 09:30:41.809375] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cd0nQMyMfg 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:54.830 00:11:54.830 real 0m5.136s 00:11:54.830 user 0m5.964s 
00:11:54.830 sys 0m0.675s 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.830 09:30:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.830 ************************************ 00:11:54.830 END TEST raid_write_error_test 00:11:54.830 ************************************ 00:11:55.089 09:30:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:55.090 09:30:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:55.090 09:30:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:55.090 09:30:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:55.090 09:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.090 ************************************ 00:11:55.090 START TEST raid_state_function_test 00:11:55.090 ************************************ 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.090 
09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:55.090 09:30:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73569 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:55.090 Process raid pid: 73569 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73569' 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73569 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73569 ']' 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:55.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:55.090 09:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.090 [2024-11-15 09:30:43.441961] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:11:55.090 [2024-11-15 09:30:43.442167] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.349 [2024-11-15 09:30:43.635878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.607 [2024-11-15 09:30:43.829015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.866 [2024-11-15 09:30:44.099958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.866 [2024-11-15 09:30:44.100010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.126 [2024-11-15 09:30:44.349743] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.126 [2024-11-15 09:30:44.349805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.126 [2024-11-15 09:30:44.349817] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.126 [2024-11-15 09:30:44.349828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.126 [2024-11-15 09:30:44.349834] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:56.126 [2024-11-15 09:30:44.349844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.126 [2024-11-15 09:30:44.349863] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.126 [2024-11-15 09:30:44.349873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.126 "name": "Existed_Raid", 00:11:56.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.126 "strip_size_kb": 0, 00:11:56.126 "state": "configuring", 00:11:56.126 "raid_level": "raid1", 00:11:56.126 "superblock": false, 00:11:56.126 "num_base_bdevs": 4, 00:11:56.126 "num_base_bdevs_discovered": 0, 00:11:56.126 "num_base_bdevs_operational": 4, 00:11:56.126 "base_bdevs_list": [ 00:11:56.126 { 00:11:56.126 "name": "BaseBdev1", 00:11:56.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.126 "is_configured": false, 00:11:56.126 "data_offset": 0, 00:11:56.126 "data_size": 0 00:11:56.126 }, 00:11:56.126 { 00:11:56.126 "name": "BaseBdev2", 00:11:56.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.126 "is_configured": false, 00:11:56.126 "data_offset": 0, 00:11:56.126 "data_size": 0 00:11:56.126 }, 00:11:56.126 { 00:11:56.126 "name": "BaseBdev3", 00:11:56.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.126 "is_configured": false, 00:11:56.126 "data_offset": 0, 00:11:56.126 "data_size": 0 00:11:56.126 }, 00:11:56.126 { 00:11:56.126 "name": "BaseBdev4", 00:11:56.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.126 "is_configured": false, 00:11:56.126 "data_offset": 0, 00:11:56.126 "data_size": 0 00:11:56.126 } 00:11:56.126 ] 00:11:56.126 }' 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.126 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.386 [2024-11-15 09:30:44.816971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.386 [2024-11-15 09:30:44.817025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.386 [2024-11-15 09:30:44.828906] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.386 [2024-11-15 09:30:44.828956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.386 [2024-11-15 09:30:44.828968] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.386 [2024-11-15 09:30:44.828979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.386 [2024-11-15 09:30:44.828987] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.386 [2024-11-15 09:30:44.828997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.386 [2024-11-15 09:30:44.829005] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.386 [2024-11-15 09:30:44.829015] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.386 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 [2024-11-15 09:30:44.890632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.646 BaseBdev1 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.646 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 [ 00:11:56.646 { 00:11:56.646 "name": "BaseBdev1", 00:11:56.646 "aliases": [ 00:11:56.646 "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8" 00:11:56.646 ], 00:11:56.646 "product_name": "Malloc disk", 00:11:56.646 "block_size": 512, 00:11:56.646 "num_blocks": 65536, 00:11:56.646 "uuid": "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8", 00:11:56.646 "assigned_rate_limits": { 00:11:56.646 "rw_ios_per_sec": 0, 00:11:56.646 "rw_mbytes_per_sec": 0, 00:11:56.646 "r_mbytes_per_sec": 0, 00:11:56.646 "w_mbytes_per_sec": 0 00:11:56.646 }, 00:11:56.646 "claimed": true, 00:11:56.646 "claim_type": "exclusive_write", 00:11:56.646 "zoned": false, 00:11:56.646 "supported_io_types": { 00:11:56.646 "read": true, 00:11:56.646 "write": true, 00:11:56.646 "unmap": true, 00:11:56.646 "flush": true, 00:11:56.646 "reset": true, 00:11:56.646 "nvme_admin": false, 00:11:56.646 "nvme_io": false, 00:11:56.646 "nvme_io_md": false, 00:11:56.646 "write_zeroes": true, 00:11:56.647 "zcopy": true, 00:11:56.647 "get_zone_info": false, 00:11:56.647 "zone_management": false, 00:11:56.647 "zone_append": false, 00:11:56.647 "compare": false, 00:11:56.647 "compare_and_write": false, 00:11:56.647 "abort": true, 00:11:56.647 "seek_hole": false, 00:11:56.647 "seek_data": false, 00:11:56.647 "copy": true, 00:11:56.647 "nvme_iov_md": false 00:11:56.647 }, 00:11:56.647 "memory_domains": [ 00:11:56.647 { 00:11:56.647 "dma_device_id": "system", 00:11:56.647 "dma_device_type": 1 00:11:56.647 }, 00:11:56.647 { 00:11:56.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.647 "dma_device_type": 2 00:11:56.647 } 00:11:56.647 ], 00:11:56.647 "driver_specific": {} 00:11:56.647 } 00:11:56.647 ] 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.647 "name": "Existed_Raid", 
00:11:56.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.647 "strip_size_kb": 0, 00:11:56.647 "state": "configuring", 00:11:56.647 "raid_level": "raid1", 00:11:56.647 "superblock": false, 00:11:56.647 "num_base_bdevs": 4, 00:11:56.647 "num_base_bdevs_discovered": 1, 00:11:56.647 "num_base_bdevs_operational": 4, 00:11:56.647 "base_bdevs_list": [ 00:11:56.647 { 00:11:56.647 "name": "BaseBdev1", 00:11:56.647 "uuid": "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8", 00:11:56.647 "is_configured": true, 00:11:56.647 "data_offset": 0, 00:11:56.647 "data_size": 65536 00:11:56.647 }, 00:11:56.647 { 00:11:56.647 "name": "BaseBdev2", 00:11:56.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.647 "is_configured": false, 00:11:56.647 "data_offset": 0, 00:11:56.647 "data_size": 0 00:11:56.647 }, 00:11:56.647 { 00:11:56.647 "name": "BaseBdev3", 00:11:56.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.647 "is_configured": false, 00:11:56.647 "data_offset": 0, 00:11:56.647 "data_size": 0 00:11:56.647 }, 00:11:56.647 { 00:11:56.647 "name": "BaseBdev4", 00:11:56.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.647 "is_configured": false, 00:11:56.647 "data_offset": 0, 00:11:56.647 "data_size": 0 00:11:56.647 } 00:11:56.647 ] 00:11:56.647 }' 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.647 09:30:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.906 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.906 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.906 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.906 [2024-11-15 09:30:45.369881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.906 [2024-11-15 09:30:45.369953] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.165 [2024-11-15 09:30:45.381904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.165 [2024-11-15 09:30:45.384104] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.165 [2024-11-15 09:30:45.384151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.165 [2024-11-15 09:30:45.384163] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.165 [2024-11-15 09:30:45.384174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.165 [2024-11-15 09:30:45.384181] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:57.165 [2024-11-15 09:30:45.384190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.165 
09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.165 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.165 "name": "Existed_Raid", 00:11:57.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.165 "strip_size_kb": 0, 00:11:57.165 "state": "configuring", 00:11:57.165 "raid_level": "raid1", 00:11:57.165 "superblock": false, 00:11:57.165 "num_base_bdevs": 4, 00:11:57.165 "num_base_bdevs_discovered": 1, 
00:11:57.165 "num_base_bdevs_operational": 4, 00:11:57.165 "base_bdevs_list": [ 00:11:57.165 { 00:11:57.165 "name": "BaseBdev1", 00:11:57.165 "uuid": "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8", 00:11:57.165 "is_configured": true, 00:11:57.165 "data_offset": 0, 00:11:57.166 "data_size": 65536 00:11:57.166 }, 00:11:57.166 { 00:11:57.166 "name": "BaseBdev2", 00:11:57.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.166 "is_configured": false, 00:11:57.166 "data_offset": 0, 00:11:57.166 "data_size": 0 00:11:57.166 }, 00:11:57.166 { 00:11:57.166 "name": "BaseBdev3", 00:11:57.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.166 "is_configured": false, 00:11:57.166 "data_offset": 0, 00:11:57.166 "data_size": 0 00:11:57.166 }, 00:11:57.166 { 00:11:57.166 "name": "BaseBdev4", 00:11:57.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.166 "is_configured": false, 00:11:57.166 "data_offset": 0, 00:11:57.166 "data_size": 0 00:11:57.166 } 00:11:57.166 ] 00:11:57.166 }' 00:11:57.166 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.166 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.424 [2024-11-15 09:30:45.880272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.424 BaseBdev2 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.424 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.684 [ 00:11:57.684 { 00:11:57.684 "name": "BaseBdev2", 00:11:57.684 "aliases": [ 00:11:57.684 "2aa64f18-b8c2-4842-b7f5-c90bdca2cfb5" 00:11:57.684 ], 00:11:57.684 "product_name": "Malloc disk", 00:11:57.684 "block_size": 512, 00:11:57.684 "num_blocks": 65536, 00:11:57.684 "uuid": "2aa64f18-b8c2-4842-b7f5-c90bdca2cfb5", 00:11:57.684 "assigned_rate_limits": { 00:11:57.684 "rw_ios_per_sec": 0, 00:11:57.684 "rw_mbytes_per_sec": 0, 00:11:57.684 "r_mbytes_per_sec": 0, 00:11:57.684 "w_mbytes_per_sec": 0 00:11:57.684 }, 00:11:57.684 "claimed": true, 00:11:57.684 "claim_type": "exclusive_write", 00:11:57.684 "zoned": false, 00:11:57.684 "supported_io_types": { 00:11:57.684 "read": true, 
00:11:57.684 "write": true, 00:11:57.684 "unmap": true, 00:11:57.684 "flush": true, 00:11:57.684 "reset": true, 00:11:57.684 "nvme_admin": false, 00:11:57.684 "nvme_io": false, 00:11:57.684 "nvme_io_md": false, 00:11:57.684 "write_zeroes": true, 00:11:57.684 "zcopy": true, 00:11:57.684 "get_zone_info": false, 00:11:57.684 "zone_management": false, 00:11:57.684 "zone_append": false, 00:11:57.684 "compare": false, 00:11:57.684 "compare_and_write": false, 00:11:57.684 "abort": true, 00:11:57.684 "seek_hole": false, 00:11:57.684 "seek_data": false, 00:11:57.684 "copy": true, 00:11:57.684 "nvme_iov_md": false 00:11:57.684 }, 00:11:57.684 "memory_domains": [ 00:11:57.684 { 00:11:57.684 "dma_device_id": "system", 00:11:57.684 "dma_device_type": 1 00:11:57.684 }, 00:11:57.684 { 00:11:57.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.684 "dma_device_type": 2 00:11:57.684 } 00:11:57.684 ], 00:11:57.684 "driver_specific": {} 00:11:57.684 } 00:11:57.684 ] 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.684 "name": "Existed_Raid", 00:11:57.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.684 "strip_size_kb": 0, 00:11:57.684 "state": "configuring", 00:11:57.684 "raid_level": "raid1", 00:11:57.684 "superblock": false, 00:11:57.684 "num_base_bdevs": 4, 00:11:57.684 "num_base_bdevs_discovered": 2, 00:11:57.684 "num_base_bdevs_operational": 4, 00:11:57.684 "base_bdevs_list": [ 00:11:57.684 { 00:11:57.684 "name": "BaseBdev1", 00:11:57.684 "uuid": "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8", 00:11:57.684 "is_configured": true, 00:11:57.684 "data_offset": 0, 00:11:57.684 "data_size": 65536 00:11:57.684 }, 00:11:57.684 { 00:11:57.684 "name": "BaseBdev2", 00:11:57.684 "uuid": "2aa64f18-b8c2-4842-b7f5-c90bdca2cfb5", 00:11:57.684 "is_configured": true, 
00:11:57.684 "data_offset": 0, 00:11:57.684 "data_size": 65536 00:11:57.684 }, 00:11:57.684 { 00:11:57.684 "name": "BaseBdev3", 00:11:57.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.684 "is_configured": false, 00:11:57.684 "data_offset": 0, 00:11:57.684 "data_size": 0 00:11:57.684 }, 00:11:57.684 { 00:11:57.684 "name": "BaseBdev4", 00:11:57.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.684 "is_configured": false, 00:11:57.684 "data_offset": 0, 00:11:57.684 "data_size": 0 00:11:57.684 } 00:11:57.684 ] 00:11:57.684 }' 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.684 09:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.944 [2024-11-15 09:30:46.361242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.944 BaseBdev3 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.944 [ 00:11:57.944 { 00:11:57.944 "name": "BaseBdev3", 00:11:57.944 "aliases": [ 00:11:57.944 "9624e965-8a1d-4529-b0de-2c0b12a004ba" 00:11:57.944 ], 00:11:57.944 "product_name": "Malloc disk", 00:11:57.944 "block_size": 512, 00:11:57.944 "num_blocks": 65536, 00:11:57.944 "uuid": "9624e965-8a1d-4529-b0de-2c0b12a004ba", 00:11:57.944 "assigned_rate_limits": { 00:11:57.944 "rw_ios_per_sec": 0, 00:11:57.944 "rw_mbytes_per_sec": 0, 00:11:57.944 "r_mbytes_per_sec": 0, 00:11:57.944 "w_mbytes_per_sec": 0 00:11:57.944 }, 00:11:57.944 "claimed": true, 00:11:57.944 "claim_type": "exclusive_write", 00:11:57.944 "zoned": false, 00:11:57.944 "supported_io_types": { 00:11:57.944 "read": true, 00:11:57.944 "write": true, 00:11:57.944 "unmap": true, 00:11:57.944 "flush": true, 00:11:57.944 "reset": true, 00:11:57.944 "nvme_admin": false, 00:11:57.944 "nvme_io": false, 00:11:57.944 "nvme_io_md": false, 00:11:57.944 "write_zeroes": true, 00:11:57.944 "zcopy": true, 00:11:57.944 "get_zone_info": false, 00:11:57.944 "zone_management": false, 00:11:57.944 "zone_append": false, 00:11:57.944 "compare": false, 00:11:57.944 "compare_and_write": false, 
00:11:57.944 "abort": true, 00:11:57.944 "seek_hole": false, 00:11:57.944 "seek_data": false, 00:11:57.944 "copy": true, 00:11:57.944 "nvme_iov_md": false 00:11:57.944 }, 00:11:57.944 "memory_domains": [ 00:11:57.944 { 00:11:57.944 "dma_device_id": "system", 00:11:57.944 "dma_device_type": 1 00:11:57.944 }, 00:11:57.944 { 00:11:57.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.944 "dma_device_type": 2 00:11:57.944 } 00:11:57.944 ], 00:11:57.944 "driver_specific": {} 00:11:57.944 } 00:11:57.944 ] 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.944 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.203 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.203 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.203 "name": "Existed_Raid", 00:11:58.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.203 "strip_size_kb": 0, 00:11:58.203 "state": "configuring", 00:11:58.203 "raid_level": "raid1", 00:11:58.203 "superblock": false, 00:11:58.203 "num_base_bdevs": 4, 00:11:58.203 "num_base_bdevs_discovered": 3, 00:11:58.203 "num_base_bdevs_operational": 4, 00:11:58.203 "base_bdevs_list": [ 00:11:58.203 { 00:11:58.203 "name": "BaseBdev1", 00:11:58.203 "uuid": "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8", 00:11:58.203 "is_configured": true, 00:11:58.203 "data_offset": 0, 00:11:58.203 "data_size": 65536 00:11:58.203 }, 00:11:58.203 { 00:11:58.203 "name": "BaseBdev2", 00:11:58.203 "uuid": "2aa64f18-b8c2-4842-b7f5-c90bdca2cfb5", 00:11:58.203 "is_configured": true, 00:11:58.203 "data_offset": 0, 00:11:58.203 "data_size": 65536 00:11:58.203 }, 00:11:58.203 { 00:11:58.203 "name": "BaseBdev3", 00:11:58.203 "uuid": "9624e965-8a1d-4529-b0de-2c0b12a004ba", 00:11:58.203 "is_configured": true, 00:11:58.203 "data_offset": 0, 00:11:58.203 "data_size": 65536 00:11:58.203 }, 00:11:58.203 { 00:11:58.203 "name": "BaseBdev4", 00:11:58.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.203 "is_configured": false, 
00:11:58.203 "data_offset": 0, 00:11:58.203 "data_size": 0 00:11:58.203 } 00:11:58.203 ] 00:11:58.203 }' 00:11:58.203 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.203 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.462 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.462 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.462 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.722 [2024-11-15 09:30:46.938517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.722 [2024-11-15 09:30:46.938587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.722 [2024-11-15 09:30:46.938599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:58.722 [2024-11-15 09:30:46.938953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:58.722 [2024-11-15 09:30:46.939175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.722 [2024-11-15 09:30:46.939200] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:58.722 [2024-11-15 09:30:46.939552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.722 BaseBdev4 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.722 [ 00:11:58.722 { 00:11:58.722 "name": "BaseBdev4", 00:11:58.722 "aliases": [ 00:11:58.722 "1b33ad06-cabf-4ebf-9c1b-dd25c2a38dc8" 00:11:58.722 ], 00:11:58.722 "product_name": "Malloc disk", 00:11:58.722 "block_size": 512, 00:11:58.722 "num_blocks": 65536, 00:11:58.722 "uuid": "1b33ad06-cabf-4ebf-9c1b-dd25c2a38dc8", 00:11:58.722 "assigned_rate_limits": { 00:11:58.722 "rw_ios_per_sec": 0, 00:11:58.722 "rw_mbytes_per_sec": 0, 00:11:58.722 "r_mbytes_per_sec": 0, 00:11:58.722 "w_mbytes_per_sec": 0 00:11:58.722 }, 00:11:58.722 "claimed": true, 00:11:58.722 "claim_type": "exclusive_write", 00:11:58.722 "zoned": false, 00:11:58.722 "supported_io_types": { 00:11:58.722 "read": true, 00:11:58.722 "write": true, 00:11:58.722 "unmap": true, 00:11:58.722 "flush": true, 00:11:58.722 "reset": true, 00:11:58.722 
"nvme_admin": false, 00:11:58.722 "nvme_io": false, 00:11:58.722 "nvme_io_md": false, 00:11:58.722 "write_zeroes": true, 00:11:58.722 "zcopy": true, 00:11:58.722 "get_zone_info": false, 00:11:58.722 "zone_management": false, 00:11:58.722 "zone_append": false, 00:11:58.722 "compare": false, 00:11:58.722 "compare_and_write": false, 00:11:58.722 "abort": true, 00:11:58.722 "seek_hole": false, 00:11:58.722 "seek_data": false, 00:11:58.722 "copy": true, 00:11:58.722 "nvme_iov_md": false 00:11:58.722 }, 00:11:58.722 "memory_domains": [ 00:11:58.722 { 00:11:58.722 "dma_device_id": "system", 00:11:58.722 "dma_device_type": 1 00:11:58.722 }, 00:11:58.722 { 00:11:58.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.722 "dma_device_type": 2 00:11:58.722 } 00:11:58.722 ], 00:11:58.722 "driver_specific": {} 00:11:58.722 } 00:11:58.722 ] 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.722 09:30:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.722 09:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.722 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.722 "name": "Existed_Raid", 00:11:58.722 "uuid": "007f2633-d735-4619-9111-f0f73eb25799", 00:11:58.722 "strip_size_kb": 0, 00:11:58.722 "state": "online", 00:11:58.722 "raid_level": "raid1", 00:11:58.722 "superblock": false, 00:11:58.722 "num_base_bdevs": 4, 00:11:58.722 "num_base_bdevs_discovered": 4, 00:11:58.722 "num_base_bdevs_operational": 4, 00:11:58.722 "base_bdevs_list": [ 00:11:58.722 { 00:11:58.722 "name": "BaseBdev1", 00:11:58.722 "uuid": "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8", 00:11:58.722 "is_configured": true, 00:11:58.722 "data_offset": 0, 00:11:58.722 "data_size": 65536 00:11:58.722 }, 00:11:58.722 { 00:11:58.722 "name": "BaseBdev2", 00:11:58.722 "uuid": "2aa64f18-b8c2-4842-b7f5-c90bdca2cfb5", 00:11:58.722 "is_configured": true, 00:11:58.722 "data_offset": 0, 00:11:58.722 "data_size": 65536 00:11:58.722 }, 00:11:58.722 { 00:11:58.722 "name": "BaseBdev3", 00:11:58.722 "uuid": 
"9624e965-8a1d-4529-b0de-2c0b12a004ba", 00:11:58.722 "is_configured": true, 00:11:58.722 "data_offset": 0, 00:11:58.722 "data_size": 65536 00:11:58.722 }, 00:11:58.722 { 00:11:58.722 "name": "BaseBdev4", 00:11:58.722 "uuid": "1b33ad06-cabf-4ebf-9c1b-dd25c2a38dc8", 00:11:58.722 "is_configured": true, 00:11:58.722 "data_offset": 0, 00:11:58.722 "data_size": 65536 00:11:58.722 } 00:11:58.722 ] 00:11:58.722 }' 00:11:58.722 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.722 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.982 [2024-11-15 09:30:47.414220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.982 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.982 09:30:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.982 "name": "Existed_Raid", 00:11:58.982 "aliases": [ 00:11:58.982 "007f2633-d735-4619-9111-f0f73eb25799" 00:11:58.982 ], 00:11:58.982 "product_name": "Raid Volume", 00:11:58.982 "block_size": 512, 00:11:58.982 "num_blocks": 65536, 00:11:58.982 "uuid": "007f2633-d735-4619-9111-f0f73eb25799", 00:11:58.982 "assigned_rate_limits": { 00:11:58.982 "rw_ios_per_sec": 0, 00:11:58.982 "rw_mbytes_per_sec": 0, 00:11:58.982 "r_mbytes_per_sec": 0, 00:11:58.982 "w_mbytes_per_sec": 0 00:11:58.982 }, 00:11:58.982 "claimed": false, 00:11:58.982 "zoned": false, 00:11:58.982 "supported_io_types": { 00:11:58.982 "read": true, 00:11:58.982 "write": true, 00:11:58.982 "unmap": false, 00:11:58.982 "flush": false, 00:11:58.982 "reset": true, 00:11:58.982 "nvme_admin": false, 00:11:58.982 "nvme_io": false, 00:11:58.982 "nvme_io_md": false, 00:11:58.982 "write_zeroes": true, 00:11:58.982 "zcopy": false, 00:11:58.982 "get_zone_info": false, 00:11:58.982 "zone_management": false, 00:11:58.982 "zone_append": false, 00:11:58.982 "compare": false, 00:11:58.982 "compare_and_write": false, 00:11:58.982 "abort": false, 00:11:58.983 "seek_hole": false, 00:11:58.983 "seek_data": false, 00:11:58.983 "copy": false, 00:11:58.983 "nvme_iov_md": false 00:11:58.983 }, 00:11:58.983 "memory_domains": [ 00:11:58.983 { 00:11:58.983 "dma_device_id": "system", 00:11:58.983 "dma_device_type": 1 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.983 "dma_device_type": 2 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "dma_device_id": "system", 00:11:58.983 "dma_device_type": 1 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.983 "dma_device_type": 2 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "dma_device_id": "system", 00:11:58.983 "dma_device_type": 1 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:58.983 "dma_device_type": 2 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "dma_device_id": "system", 00:11:58.983 "dma_device_type": 1 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.983 "dma_device_type": 2 00:11:58.983 } 00:11:58.983 ], 00:11:58.983 "driver_specific": { 00:11:58.983 "raid": { 00:11:58.983 "uuid": "007f2633-d735-4619-9111-f0f73eb25799", 00:11:58.983 "strip_size_kb": 0, 00:11:58.983 "state": "online", 00:11:58.983 "raid_level": "raid1", 00:11:58.983 "superblock": false, 00:11:58.983 "num_base_bdevs": 4, 00:11:58.983 "num_base_bdevs_discovered": 4, 00:11:58.983 "num_base_bdevs_operational": 4, 00:11:58.983 "base_bdevs_list": [ 00:11:58.983 { 00:11:58.983 "name": "BaseBdev1", 00:11:58.983 "uuid": "92dc59be-8ff2-4a3f-9e55-817ecd0f97f8", 00:11:58.983 "is_configured": true, 00:11:58.983 "data_offset": 0, 00:11:58.983 "data_size": 65536 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "name": "BaseBdev2", 00:11:58.983 "uuid": "2aa64f18-b8c2-4842-b7f5-c90bdca2cfb5", 00:11:58.983 "is_configured": true, 00:11:58.983 "data_offset": 0, 00:11:58.983 "data_size": 65536 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "name": "BaseBdev3", 00:11:58.983 "uuid": "9624e965-8a1d-4529-b0de-2c0b12a004ba", 00:11:58.983 "is_configured": true, 00:11:58.983 "data_offset": 0, 00:11:58.983 "data_size": 65536 00:11:58.983 }, 00:11:58.983 { 00:11:58.983 "name": "BaseBdev4", 00:11:58.983 "uuid": "1b33ad06-cabf-4ebf-9c1b-dd25c2a38dc8", 00:11:58.983 "is_configured": true, 00:11:58.983 "data_offset": 0, 00:11:58.983 "data_size": 65536 00:11:58.983 } 00:11:58.983 ] 00:11:58.983 } 00:11:58.983 } 00:11:58.983 }' 00:11:58.983 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:59.242 BaseBdev2 00:11:59.242 BaseBdev3 
00:11:59.242 BaseBdev4' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.242 09:30:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.242 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.505 09:30:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.505 [2024-11-15 09:30:47.717338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.505 
09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.505 "name": "Existed_Raid", 00:11:59.505 "uuid": "007f2633-d735-4619-9111-f0f73eb25799", 00:11:59.505 "strip_size_kb": 0, 00:11:59.505 "state": "online", 00:11:59.505 "raid_level": "raid1", 00:11:59.505 "superblock": false, 00:11:59.505 "num_base_bdevs": 4, 00:11:59.505 "num_base_bdevs_discovered": 3, 00:11:59.505 "num_base_bdevs_operational": 3, 00:11:59.505 "base_bdevs_list": [ 00:11:59.505 { 00:11:59.505 "name": null, 00:11:59.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.505 "is_configured": false, 00:11:59.505 "data_offset": 0, 00:11:59.505 "data_size": 65536 00:11:59.505 }, 00:11:59.505 { 00:11:59.505 "name": "BaseBdev2", 00:11:59.505 "uuid": "2aa64f18-b8c2-4842-b7f5-c90bdca2cfb5", 00:11:59.505 "is_configured": true, 00:11:59.505 "data_offset": 0, 00:11:59.505 "data_size": 65536 00:11:59.505 }, 00:11:59.505 { 00:11:59.505 "name": "BaseBdev3", 00:11:59.505 "uuid": "9624e965-8a1d-4529-b0de-2c0b12a004ba", 00:11:59.505 "is_configured": true, 00:11:59.505 "data_offset": 0, 
00:11:59.505 "data_size": 65536 00:11:59.505 }, 00:11:59.505 { 00:11:59.505 "name": "BaseBdev4", 00:11:59.505 "uuid": "1b33ad06-cabf-4ebf-9c1b-dd25c2a38dc8", 00:11:59.505 "is_configured": true, 00:11:59.505 "data_offset": 0, 00:11:59.505 "data_size": 65536 00:11:59.505 } 00:11:59.505 ] 00:11:59.505 }' 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.505 09:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.091 [2024-11-15 09:30:48.319292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.091 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.092 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.092 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.092 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:00.092 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.092 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.092 [2024-11-15 09:30:48.496174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.350 [2024-11-15 09:30:48.668574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:00.350 [2024-11-15 09:30:48.668797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.350 [2024-11-15 09:30:48.788224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.350 [2024-11-15 09:30:48.788421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.350 [2024-11-15 09:30:48.788446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.350 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.351 09:30:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:00.351 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.351 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.351 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.351 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.610 BaseBdev2 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.610 [ 00:12:00.610 { 00:12:00.610 "name": "BaseBdev2", 00:12:00.610 "aliases": [ 00:12:00.610 "395fbb68-bec6-4ff8-95d3-72920ac29faa" 00:12:00.610 ], 00:12:00.610 "product_name": "Malloc disk", 00:12:00.610 "block_size": 512, 00:12:00.610 "num_blocks": 65536, 00:12:00.610 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:00.610 "assigned_rate_limits": { 00:12:00.610 "rw_ios_per_sec": 0, 00:12:00.610 "rw_mbytes_per_sec": 0, 00:12:00.610 "r_mbytes_per_sec": 0, 00:12:00.610 "w_mbytes_per_sec": 0 00:12:00.610 }, 00:12:00.610 "claimed": false, 00:12:00.610 "zoned": false, 00:12:00.610 "supported_io_types": { 00:12:00.610 "read": true, 00:12:00.610 "write": true, 00:12:00.610 "unmap": true, 00:12:00.610 "flush": true, 00:12:00.610 "reset": true, 00:12:00.610 "nvme_admin": false, 00:12:00.610 "nvme_io": false, 00:12:00.610 "nvme_io_md": false, 00:12:00.610 "write_zeroes": true, 00:12:00.610 "zcopy": true, 00:12:00.610 "get_zone_info": false, 00:12:00.610 "zone_management": false, 00:12:00.610 "zone_append": false, 
00:12:00.610 "compare": false, 00:12:00.610 "compare_and_write": false, 00:12:00.610 "abort": true, 00:12:00.610 "seek_hole": false, 00:12:00.610 "seek_data": false, 00:12:00.610 "copy": true, 00:12:00.610 "nvme_iov_md": false 00:12:00.610 }, 00:12:00.610 "memory_domains": [ 00:12:00.610 { 00:12:00.610 "dma_device_id": "system", 00:12:00.610 "dma_device_type": 1 00:12:00.610 }, 00:12:00.610 { 00:12:00.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.610 "dma_device_type": 2 00:12:00.610 } 00:12:00.610 ], 00:12:00.610 "driver_specific": {} 00:12:00.610 } 00:12:00.610 ] 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.610 BaseBdev3 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:00.610 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:00.611 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:00.611 09:30:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:00.611 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:00.611 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:00.611 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.611 09:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.611 [ 00:12:00.611 { 00:12:00.611 "name": "BaseBdev3", 00:12:00.611 "aliases": [ 00:12:00.611 "c9edea74-7ce7-48e6-a259-c96183ccab99" 00:12:00.611 ], 00:12:00.611 "product_name": "Malloc disk", 00:12:00.611 "block_size": 512, 00:12:00.611 "num_blocks": 65536, 00:12:00.611 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:00.611 "assigned_rate_limits": { 00:12:00.611 "rw_ios_per_sec": 0, 00:12:00.611 "rw_mbytes_per_sec": 0, 00:12:00.611 "r_mbytes_per_sec": 0, 00:12:00.611 "w_mbytes_per_sec": 0 00:12:00.611 }, 00:12:00.611 "claimed": false, 00:12:00.611 "zoned": false, 00:12:00.611 "supported_io_types": { 00:12:00.611 "read": true, 00:12:00.611 "write": true, 00:12:00.611 "unmap": true, 00:12:00.611 "flush": true, 00:12:00.611 "reset": true, 00:12:00.611 "nvme_admin": false, 00:12:00.611 "nvme_io": false, 00:12:00.611 "nvme_io_md": false, 00:12:00.611 "write_zeroes": true, 00:12:00.611 "zcopy": true, 00:12:00.611 "get_zone_info": false, 00:12:00.611 "zone_management": false, 00:12:00.611 "zone_append": false, 
00:12:00.611 "compare": false, 00:12:00.611 "compare_and_write": false, 00:12:00.611 "abort": true, 00:12:00.611 "seek_hole": false, 00:12:00.611 "seek_data": false, 00:12:00.611 "copy": true, 00:12:00.611 "nvme_iov_md": false 00:12:00.611 }, 00:12:00.611 "memory_domains": [ 00:12:00.611 { 00:12:00.611 "dma_device_id": "system", 00:12:00.611 "dma_device_type": 1 00:12:00.611 }, 00:12:00.611 { 00:12:00.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.611 "dma_device_type": 2 00:12:00.611 } 00:12:00.611 ], 00:12:00.611 "driver_specific": {} 00:12:00.611 } 00:12:00.611 ] 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.611 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.869 BaseBdev4 00:12:00.869 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.869 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:00.869 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.870 [ 00:12:00.870 { 00:12:00.870 "name": "BaseBdev4", 00:12:00.870 "aliases": [ 00:12:00.870 "26f14efa-db93-4469-9d0b-390b167a6c47" 00:12:00.870 ], 00:12:00.870 "product_name": "Malloc disk", 00:12:00.870 "block_size": 512, 00:12:00.870 "num_blocks": 65536, 00:12:00.870 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:00.870 "assigned_rate_limits": { 00:12:00.870 "rw_ios_per_sec": 0, 00:12:00.870 "rw_mbytes_per_sec": 0, 00:12:00.870 "r_mbytes_per_sec": 0, 00:12:00.870 "w_mbytes_per_sec": 0 00:12:00.870 }, 00:12:00.870 "claimed": false, 00:12:00.870 "zoned": false, 00:12:00.870 "supported_io_types": { 00:12:00.870 "read": true, 00:12:00.870 "write": true, 00:12:00.870 "unmap": true, 00:12:00.870 "flush": true, 00:12:00.870 "reset": true, 00:12:00.870 "nvme_admin": false, 00:12:00.870 "nvme_io": false, 00:12:00.870 "nvme_io_md": false, 00:12:00.870 "write_zeroes": true, 00:12:00.870 "zcopy": true, 00:12:00.870 "get_zone_info": false, 00:12:00.870 "zone_management": false, 00:12:00.870 "zone_append": false, 
00:12:00.870 "compare": false, 00:12:00.870 "compare_and_write": false, 00:12:00.870 "abort": true, 00:12:00.870 "seek_hole": false, 00:12:00.870 "seek_data": false, 00:12:00.870 "copy": true, 00:12:00.870 "nvme_iov_md": false 00:12:00.870 }, 00:12:00.870 "memory_domains": [ 00:12:00.870 { 00:12:00.870 "dma_device_id": "system", 00:12:00.870 "dma_device_type": 1 00:12:00.870 }, 00:12:00.870 { 00:12:00.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.870 "dma_device_type": 2 00:12:00.870 } 00:12:00.870 ], 00:12:00.870 "driver_specific": {} 00:12:00.870 } 00:12:00.870 ] 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.870 [2024-11-15 09:30:49.135129] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.870 [2024-11-15 09:30:49.135251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.870 [2024-11-15 09:30:49.135305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.870 [2024-11-15 09:30:49.137816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.870 [2024-11-15 09:30:49.137937] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:00.870 "name": "Existed_Raid", 00:12:00.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.870 "strip_size_kb": 0, 00:12:00.870 "state": "configuring", 00:12:00.870 "raid_level": "raid1", 00:12:00.870 "superblock": false, 00:12:00.870 "num_base_bdevs": 4, 00:12:00.870 "num_base_bdevs_discovered": 3, 00:12:00.870 "num_base_bdevs_operational": 4, 00:12:00.870 "base_bdevs_list": [ 00:12:00.870 { 00:12:00.870 "name": "BaseBdev1", 00:12:00.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.870 "is_configured": false, 00:12:00.870 "data_offset": 0, 00:12:00.870 "data_size": 0 00:12:00.870 }, 00:12:00.870 { 00:12:00.870 "name": "BaseBdev2", 00:12:00.870 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:00.870 "is_configured": true, 00:12:00.870 "data_offset": 0, 00:12:00.870 "data_size": 65536 00:12:00.870 }, 00:12:00.870 { 00:12:00.870 "name": "BaseBdev3", 00:12:00.870 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:00.870 "is_configured": true, 00:12:00.870 "data_offset": 0, 00:12:00.870 "data_size": 65536 00:12:00.870 }, 00:12:00.870 { 00:12:00.870 "name": "BaseBdev4", 00:12:00.870 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:00.870 "is_configured": true, 00:12:00.870 "data_offset": 0, 00:12:00.870 "data_size": 65536 00:12:00.870 } 00:12:00.870 ] 00:12:00.870 }' 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.870 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.438 [2024-11-15 09:30:49.622363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.438 "name": "Existed_Raid", 00:12:01.438 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.438 "strip_size_kb": 0, 00:12:01.438 "state": "configuring", 00:12:01.438 "raid_level": "raid1", 00:12:01.438 "superblock": false, 00:12:01.438 "num_base_bdevs": 4, 00:12:01.438 "num_base_bdevs_discovered": 2, 00:12:01.438 "num_base_bdevs_operational": 4, 00:12:01.438 "base_bdevs_list": [ 00:12:01.438 { 00:12:01.438 "name": "BaseBdev1", 00:12:01.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.438 "is_configured": false, 00:12:01.438 "data_offset": 0, 00:12:01.438 "data_size": 0 00:12:01.438 }, 00:12:01.438 { 00:12:01.438 "name": null, 00:12:01.438 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:01.438 "is_configured": false, 00:12:01.438 "data_offset": 0, 00:12:01.438 "data_size": 65536 00:12:01.438 }, 00:12:01.438 { 00:12:01.438 "name": "BaseBdev3", 00:12:01.438 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:01.438 "is_configured": true, 00:12:01.438 "data_offset": 0, 00:12:01.438 "data_size": 65536 00:12:01.438 }, 00:12:01.438 { 00:12:01.438 "name": "BaseBdev4", 00:12:01.438 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:01.438 "is_configured": true, 00:12:01.438 "data_offset": 0, 00:12:01.438 "data_size": 65536 00:12:01.438 } 00:12:01.438 ] 00:12:01.438 }' 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.438 09:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.696 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.955 [2024-11-15 09:30:50.171526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.955 BaseBdev1 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.955 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.955 [ 00:12:01.955 { 00:12:01.955 "name": "BaseBdev1", 00:12:01.955 "aliases": [ 00:12:01.955 "cf049c4f-d069-4ecc-8aba-aad3de4b7deb" 00:12:01.955 ], 00:12:01.955 "product_name": "Malloc disk", 00:12:01.955 "block_size": 512, 00:12:01.955 "num_blocks": 65536, 00:12:01.955 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:01.955 "assigned_rate_limits": { 00:12:01.955 "rw_ios_per_sec": 0, 00:12:01.955 "rw_mbytes_per_sec": 0, 00:12:01.955 "r_mbytes_per_sec": 0, 00:12:01.955 "w_mbytes_per_sec": 0 00:12:01.955 }, 00:12:01.955 "claimed": true, 00:12:01.955 "claim_type": "exclusive_write", 00:12:01.955 "zoned": false, 00:12:01.955 "supported_io_types": { 00:12:01.955 "read": true, 00:12:01.955 "write": true, 00:12:01.955 "unmap": true, 00:12:01.955 "flush": true, 00:12:01.955 "reset": true, 00:12:01.955 "nvme_admin": false, 00:12:01.955 "nvme_io": false, 00:12:01.955 "nvme_io_md": false, 00:12:01.955 "write_zeroes": true, 00:12:01.956 "zcopy": true, 00:12:01.956 "get_zone_info": false, 00:12:01.956 "zone_management": false, 00:12:01.956 "zone_append": false, 00:12:01.956 "compare": false, 00:12:01.956 "compare_and_write": false, 00:12:01.956 "abort": true, 00:12:01.956 "seek_hole": false, 00:12:01.956 "seek_data": false, 00:12:01.956 "copy": true, 00:12:01.956 "nvme_iov_md": false 00:12:01.956 }, 00:12:01.956 "memory_domains": [ 00:12:01.956 { 00:12:01.956 "dma_device_id": "system", 00:12:01.956 "dma_device_type": 1 00:12:01.956 }, 00:12:01.956 { 00:12:01.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.956 "dma_device_type": 2 00:12:01.956 } 00:12:01.956 ], 00:12:01.956 "driver_specific": {} 00:12:01.956 } 00:12:01.956 ] 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.956 "name": "Existed_Raid", 00:12:01.956 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:01.956 "strip_size_kb": 0, 00:12:01.956 "state": "configuring", 00:12:01.956 "raid_level": "raid1", 00:12:01.956 "superblock": false, 00:12:01.956 "num_base_bdevs": 4, 00:12:01.956 "num_base_bdevs_discovered": 3, 00:12:01.956 "num_base_bdevs_operational": 4, 00:12:01.956 "base_bdevs_list": [ 00:12:01.956 { 00:12:01.956 "name": "BaseBdev1", 00:12:01.956 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:01.956 "is_configured": true, 00:12:01.956 "data_offset": 0, 00:12:01.956 "data_size": 65536 00:12:01.956 }, 00:12:01.956 { 00:12:01.956 "name": null, 00:12:01.956 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:01.956 "is_configured": false, 00:12:01.956 "data_offset": 0, 00:12:01.956 "data_size": 65536 00:12:01.956 }, 00:12:01.956 { 00:12:01.956 "name": "BaseBdev3", 00:12:01.956 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:01.956 "is_configured": true, 00:12:01.956 "data_offset": 0, 00:12:01.956 "data_size": 65536 00:12:01.956 }, 00:12:01.956 { 00:12:01.956 "name": "BaseBdev4", 00:12:01.956 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:01.956 "is_configured": true, 00:12:01.956 "data_offset": 0, 00:12:01.956 "data_size": 65536 00:12:01.956 } 00:12:01.956 ] 00:12:01.956 }' 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.956 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.524 [2024-11-15 09:30:50.758606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.524 "name": "Existed_Raid", 00:12:02.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.524 "strip_size_kb": 0, 00:12:02.524 "state": "configuring", 00:12:02.524 "raid_level": "raid1", 00:12:02.524 "superblock": false, 00:12:02.524 "num_base_bdevs": 4, 00:12:02.524 "num_base_bdevs_discovered": 2, 00:12:02.524 "num_base_bdevs_operational": 4, 00:12:02.524 "base_bdevs_list": [ 00:12:02.524 { 00:12:02.524 "name": "BaseBdev1", 00:12:02.524 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:02.524 "is_configured": true, 00:12:02.524 "data_offset": 0, 00:12:02.524 "data_size": 65536 00:12:02.524 }, 00:12:02.524 { 00:12:02.524 "name": null, 00:12:02.524 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:02.524 "is_configured": false, 00:12:02.524 "data_offset": 0, 00:12:02.524 "data_size": 65536 00:12:02.524 }, 00:12:02.524 { 00:12:02.524 "name": null, 00:12:02.524 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:02.524 "is_configured": false, 00:12:02.524 "data_offset": 0, 00:12:02.524 "data_size": 65536 00:12:02.524 }, 00:12:02.524 { 00:12:02.524 "name": "BaseBdev4", 00:12:02.524 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:02.524 "is_configured": true, 00:12:02.524 "data_offset": 0, 00:12:02.524 "data_size": 65536 00:12:02.524 } 00:12:02.524 ] 00:12:02.524 }' 00:12:02.524 09:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.524 09:30:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.852 [2024-11-15 09:30:51.249790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.852 09:30:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.852 "name": "Existed_Raid", 00:12:02.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.852 "strip_size_kb": 0, 00:12:02.852 "state": "configuring", 00:12:02.852 "raid_level": "raid1", 00:12:02.852 "superblock": false, 00:12:02.852 "num_base_bdevs": 4, 00:12:02.852 "num_base_bdevs_discovered": 3, 00:12:02.852 "num_base_bdevs_operational": 4, 00:12:02.852 "base_bdevs_list": [ 00:12:02.852 { 00:12:02.852 "name": "BaseBdev1", 00:12:02.852 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:02.852 "is_configured": true, 00:12:02.852 "data_offset": 0, 00:12:02.852 "data_size": 65536 00:12:02.852 }, 00:12:02.852 { 00:12:02.852 "name": null, 00:12:02.852 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:02.852 "is_configured": false, 00:12:02.852 "data_offset": 
0, 00:12:02.852 "data_size": 65536 00:12:02.852 }, 00:12:02.852 { 00:12:02.852 "name": "BaseBdev3", 00:12:02.852 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:02.852 "is_configured": true, 00:12:02.852 "data_offset": 0, 00:12:02.852 "data_size": 65536 00:12:02.852 }, 00:12:02.852 { 00:12:02.852 "name": "BaseBdev4", 00:12:02.852 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:02.852 "is_configured": true, 00:12:02.852 "data_offset": 0, 00:12:02.852 "data_size": 65536 00:12:02.852 } 00:12:02.852 ] 00:12:02.852 }' 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.852 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.420 [2024-11-15 09:30:51.725085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.420 09:30:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.420 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.679 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.679 "name": "Existed_Raid", 00:12:03.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.679 "strip_size_kb": 0, 00:12:03.679 "state": "configuring", 00:12:03.679 
"raid_level": "raid1", 00:12:03.679 "superblock": false, 00:12:03.679 "num_base_bdevs": 4, 00:12:03.679 "num_base_bdevs_discovered": 2, 00:12:03.680 "num_base_bdevs_operational": 4, 00:12:03.680 "base_bdevs_list": [ 00:12:03.680 { 00:12:03.680 "name": null, 00:12:03.680 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:03.680 "is_configured": false, 00:12:03.680 "data_offset": 0, 00:12:03.680 "data_size": 65536 00:12:03.680 }, 00:12:03.680 { 00:12:03.680 "name": null, 00:12:03.680 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:03.680 "is_configured": false, 00:12:03.680 "data_offset": 0, 00:12:03.680 "data_size": 65536 00:12:03.680 }, 00:12:03.680 { 00:12:03.680 "name": "BaseBdev3", 00:12:03.680 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:03.680 "is_configured": true, 00:12:03.680 "data_offset": 0, 00:12:03.680 "data_size": 65536 00:12:03.680 }, 00:12:03.680 { 00:12:03.680 "name": "BaseBdev4", 00:12:03.680 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:03.680 "is_configured": true, 00:12:03.680 "data_offset": 0, 00:12:03.680 "data_size": 65536 00:12:03.680 } 00:12:03.680 ] 00:12:03.680 }' 00:12:03.680 09:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.680 09:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 [2024-11-15 09:30:52.359070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.939 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.199 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.199 "name": "Existed_Raid", 00:12:04.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.199 "strip_size_kb": 0, 00:12:04.199 "state": "configuring", 00:12:04.199 "raid_level": "raid1", 00:12:04.199 "superblock": false, 00:12:04.199 "num_base_bdevs": 4, 00:12:04.199 "num_base_bdevs_discovered": 3, 00:12:04.199 "num_base_bdevs_operational": 4, 00:12:04.199 "base_bdevs_list": [ 00:12:04.199 { 00:12:04.199 "name": null, 00:12:04.199 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:04.199 "is_configured": false, 00:12:04.199 "data_offset": 0, 00:12:04.199 "data_size": 65536 00:12:04.199 }, 00:12:04.199 { 00:12:04.199 "name": "BaseBdev2", 00:12:04.199 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:04.199 "is_configured": true, 00:12:04.199 "data_offset": 0, 00:12:04.199 "data_size": 65536 00:12:04.199 }, 00:12:04.199 { 00:12:04.199 "name": "BaseBdev3", 00:12:04.199 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:04.199 "is_configured": true, 00:12:04.199 "data_offset": 0, 00:12:04.199 "data_size": 65536 00:12:04.199 }, 00:12:04.199 { 00:12:04.199 "name": "BaseBdev4", 00:12:04.199 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:04.199 "is_configured": true, 00:12:04.199 "data_offset": 0, 00:12:04.199 "data_size": 65536 00:12:04.199 } 00:12:04.199 ] 00:12:04.199 }' 00:12:04.199 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.199 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.459 09:30:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.459 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cf049c4f-d069-4ecc-8aba-aad3de4b7deb 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.719 [2024-11-15 09:30:52.968715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:04.719 [2024-11-15 09:30:52.968915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.719 [2024-11-15 09:30:52.968948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:04.719 
[2024-11-15 09:30:52.969293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:04.719 [2024-11-15 09:30:52.969508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.719 [2024-11-15 09:30:52.969551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:04.719 [2024-11-15 09:30:52.969897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.719 NewBaseBdev 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:04.719 09:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.719 [ 00:12:04.719 { 00:12:04.719 "name": "NewBaseBdev", 00:12:04.719 "aliases": [ 00:12:04.719 "cf049c4f-d069-4ecc-8aba-aad3de4b7deb" 00:12:04.719 ], 00:12:04.719 "product_name": "Malloc disk", 00:12:04.719 "block_size": 512, 00:12:04.719 "num_blocks": 65536, 00:12:04.719 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:04.719 "assigned_rate_limits": { 00:12:04.719 "rw_ios_per_sec": 0, 00:12:04.719 "rw_mbytes_per_sec": 0, 00:12:04.719 "r_mbytes_per_sec": 0, 00:12:04.719 "w_mbytes_per_sec": 0 00:12:04.719 }, 00:12:04.719 "claimed": true, 00:12:04.719 "claim_type": "exclusive_write", 00:12:04.719 "zoned": false, 00:12:04.719 "supported_io_types": { 00:12:04.719 "read": true, 00:12:04.719 "write": true, 00:12:04.719 "unmap": true, 00:12:04.719 "flush": true, 00:12:04.719 "reset": true, 00:12:04.719 "nvme_admin": false, 00:12:04.719 "nvme_io": false, 00:12:04.719 "nvme_io_md": false, 00:12:04.719 "write_zeroes": true, 00:12:04.719 "zcopy": true, 00:12:04.719 "get_zone_info": false, 00:12:04.719 "zone_management": false, 00:12:04.719 "zone_append": false, 00:12:04.719 "compare": false, 00:12:04.719 "compare_and_write": false, 00:12:04.719 "abort": true, 00:12:04.719 "seek_hole": false, 00:12:04.719 "seek_data": false, 00:12:04.719 "copy": true, 00:12:04.719 "nvme_iov_md": false 00:12:04.719 }, 00:12:04.719 "memory_domains": [ 00:12:04.719 { 00:12:04.719 "dma_device_id": "system", 00:12:04.719 "dma_device_type": 1 00:12:04.719 }, 00:12:04.719 { 00:12:04.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.719 "dma_device_type": 2 00:12:04.719 } 00:12:04.719 ], 00:12:04.720 "driver_specific": {} 00:12:04.720 } 00:12:04.720 ] 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.720 "name": "Existed_Raid", 00:12:04.720 "uuid": "807689e4-a187-47dd-8a0b-f86e64bf635d", 00:12:04.720 "strip_size_kb": 0, 00:12:04.720 "state": "online", 00:12:04.720 
"raid_level": "raid1", 00:12:04.720 "superblock": false, 00:12:04.720 "num_base_bdevs": 4, 00:12:04.720 "num_base_bdevs_discovered": 4, 00:12:04.720 "num_base_bdevs_operational": 4, 00:12:04.720 "base_bdevs_list": [ 00:12:04.720 { 00:12:04.720 "name": "NewBaseBdev", 00:12:04.720 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:04.720 "is_configured": true, 00:12:04.720 "data_offset": 0, 00:12:04.720 "data_size": 65536 00:12:04.720 }, 00:12:04.720 { 00:12:04.720 "name": "BaseBdev2", 00:12:04.720 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:04.720 "is_configured": true, 00:12:04.720 "data_offset": 0, 00:12:04.720 "data_size": 65536 00:12:04.720 }, 00:12:04.720 { 00:12:04.720 "name": "BaseBdev3", 00:12:04.720 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:04.720 "is_configured": true, 00:12:04.720 "data_offset": 0, 00:12:04.720 "data_size": 65536 00:12:04.720 }, 00:12:04.720 { 00:12:04.720 "name": "BaseBdev4", 00:12:04.720 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:04.720 "is_configured": true, 00:12:04.720 "data_offset": 0, 00:12:04.720 "data_size": 65536 00:12:04.720 } 00:12:04.720 ] 00:12:04.720 }' 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.720 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.289 [2024-11-15 09:30:53.480936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.289 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.289 "name": "Existed_Raid", 00:12:05.289 "aliases": [ 00:12:05.289 "807689e4-a187-47dd-8a0b-f86e64bf635d" 00:12:05.289 ], 00:12:05.289 "product_name": "Raid Volume", 00:12:05.289 "block_size": 512, 00:12:05.289 "num_blocks": 65536, 00:12:05.289 "uuid": "807689e4-a187-47dd-8a0b-f86e64bf635d", 00:12:05.289 "assigned_rate_limits": { 00:12:05.289 "rw_ios_per_sec": 0, 00:12:05.289 "rw_mbytes_per_sec": 0, 00:12:05.289 "r_mbytes_per_sec": 0, 00:12:05.289 "w_mbytes_per_sec": 0 00:12:05.289 }, 00:12:05.289 "claimed": false, 00:12:05.289 "zoned": false, 00:12:05.289 "supported_io_types": { 00:12:05.289 "read": true, 00:12:05.289 "write": true, 00:12:05.289 "unmap": false, 00:12:05.289 "flush": false, 00:12:05.289 "reset": true, 00:12:05.289 "nvme_admin": false, 00:12:05.289 "nvme_io": false, 00:12:05.289 "nvme_io_md": false, 00:12:05.289 "write_zeroes": true, 00:12:05.289 "zcopy": false, 00:12:05.289 "get_zone_info": false, 00:12:05.289 "zone_management": false, 00:12:05.289 "zone_append": false, 00:12:05.289 "compare": false, 00:12:05.289 "compare_and_write": false, 00:12:05.289 "abort": false, 00:12:05.289 "seek_hole": false, 00:12:05.289 "seek_data": false, 00:12:05.289 
"copy": false, 00:12:05.289 "nvme_iov_md": false 00:12:05.289 }, 00:12:05.289 "memory_domains": [ 00:12:05.289 { 00:12:05.289 "dma_device_id": "system", 00:12:05.289 "dma_device_type": 1 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.289 "dma_device_type": 2 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "dma_device_id": "system", 00:12:05.289 "dma_device_type": 1 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.289 "dma_device_type": 2 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "dma_device_id": "system", 00:12:05.289 "dma_device_type": 1 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.289 "dma_device_type": 2 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "dma_device_id": "system", 00:12:05.289 "dma_device_type": 1 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.289 "dma_device_type": 2 00:12:05.289 } 00:12:05.289 ], 00:12:05.289 "driver_specific": { 00:12:05.289 "raid": { 00:12:05.289 "uuid": "807689e4-a187-47dd-8a0b-f86e64bf635d", 00:12:05.289 "strip_size_kb": 0, 00:12:05.289 "state": "online", 00:12:05.289 "raid_level": "raid1", 00:12:05.289 "superblock": false, 00:12:05.289 "num_base_bdevs": 4, 00:12:05.289 "num_base_bdevs_discovered": 4, 00:12:05.289 "num_base_bdevs_operational": 4, 00:12:05.289 "base_bdevs_list": [ 00:12:05.289 { 00:12:05.289 "name": "NewBaseBdev", 00:12:05.289 "uuid": "cf049c4f-d069-4ecc-8aba-aad3de4b7deb", 00:12:05.289 "is_configured": true, 00:12:05.289 "data_offset": 0, 00:12:05.289 "data_size": 65536 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "name": "BaseBdev2", 00:12:05.289 "uuid": "395fbb68-bec6-4ff8-95d3-72920ac29faa", 00:12:05.289 "is_configured": true, 00:12:05.289 "data_offset": 0, 00:12:05.289 "data_size": 65536 00:12:05.289 }, 00:12:05.289 { 00:12:05.289 "name": "BaseBdev3", 00:12:05.289 "uuid": "c9edea74-7ce7-48e6-a259-c96183ccab99", 00:12:05.289 
"is_configured": true, 00:12:05.289 "data_offset": 0, 00:12:05.289 "data_size": 65536 00:12:05.290 }, 00:12:05.290 { 00:12:05.290 "name": "BaseBdev4", 00:12:05.290 "uuid": "26f14efa-db93-4469-9d0b-390b167a6c47", 00:12:05.290 "is_configured": true, 00:12:05.290 "data_offset": 0, 00:12:05.290 "data_size": 65536 00:12:05.290 } 00:12:05.290 ] 00:12:05.290 } 00:12:05.290 } 00:12:05.290 }' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:05.290 BaseBdev2 00:12:05.290 BaseBdev3 00:12:05.290 BaseBdev4' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.290 09:30:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.290 09:30:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.290 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.550 [2024-11-15 09:30:53.807416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.550 [2024-11-15 09:30:53.807523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.550 [2024-11-15 09:30:53.807681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.550 [2024-11-15 09:30:53.808079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.550 [2024-11-15 09:30:53.808099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73569 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73569 ']' 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73569 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73569 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73569' 00:12:05.550 killing process with pid 73569 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73569 00:12:05.550 [2024-11-15 09:30:53.858740] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.550 09:30:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73569 00:12:06.142 [2024-11-15 09:30:54.313342] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.088 09:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:07.088 00:12:07.088 real 0m12.198s 00:12:07.088 user 0m19.073s 00:12:07.088 sys 0m2.341s 00:12:07.088 09:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:07.088 09:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.088 ************************************ 00:12:07.088 END TEST raid_state_function_test 00:12:07.088 ************************************ 
00:12:07.347 09:30:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:07.347 09:30:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:07.347 09:30:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:07.347 09:30:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.347 ************************************ 00:12:07.347 START TEST raid_state_function_test_sb 00:12:07.347 ************************************ 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.347 
09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74246 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:07.347 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74246' 00:12:07.347 Process raid pid: 74246 00:12:07.348 09:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74246 00:12:07.348 09:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74246 ']' 00:12:07.348 09:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.348 09:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:07.348 09:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.348 09:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:07.348 09:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.348 [2024-11-15 09:30:55.698753] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:12:07.348 [2024-11-15 09:30:55.699028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.607 [2024-11-15 09:30:55.883607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.607 [2024-11-15 09:30:56.013252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.866 [2024-11-15 09:30:56.235880] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.866 [2024-11-15 09:30:56.236031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.125 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:08.125 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:08.125 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.125 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.125 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.125 [2024-11-15 09:30:56.587482] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.125 [2024-11-15 09:30:56.587557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.125 [2024-11-15 09:30:56.587569] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.125 [2024-11-15 09:30:56.587579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.125 [2024-11-15 09:30:56.587586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:08.125 [2024-11-15 09:30:56.587595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.125 [2024-11-15 09:30:56.587601] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:08.125 [2024-11-15 09:30:56.587610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.384 09:30:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.384 "name": "Existed_Raid", 00:12:08.384 "uuid": "e891f858-a6a8-447e-9aac-380cb81c97b9", 00:12:08.384 "strip_size_kb": 0, 00:12:08.384 "state": "configuring", 00:12:08.384 "raid_level": "raid1", 00:12:08.384 "superblock": true, 00:12:08.384 "num_base_bdevs": 4, 00:12:08.384 "num_base_bdevs_discovered": 0, 00:12:08.384 "num_base_bdevs_operational": 4, 00:12:08.384 "base_bdevs_list": [ 00:12:08.384 { 00:12:08.384 "name": "BaseBdev1", 00:12:08.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.384 "is_configured": false, 00:12:08.384 "data_offset": 0, 00:12:08.384 "data_size": 0 00:12:08.384 }, 00:12:08.384 { 00:12:08.384 "name": "BaseBdev2", 00:12:08.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.384 "is_configured": false, 00:12:08.384 "data_offset": 0, 00:12:08.384 "data_size": 0 00:12:08.384 }, 00:12:08.384 { 00:12:08.384 "name": "BaseBdev3", 00:12:08.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.384 "is_configured": false, 00:12:08.384 "data_offset": 0, 00:12:08.384 "data_size": 0 00:12:08.384 }, 00:12:08.384 { 00:12:08.384 "name": "BaseBdev4", 00:12:08.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.384 "is_configured": false, 00:12:08.384 "data_offset": 0, 00:12:08.384 "data_size": 0 00:12:08.384 } 00:12:08.384 ] 00:12:08.384 }' 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.384 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.642 09:30:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.642 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.642 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.642 [2024-11-15 09:30:56.994729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.642 [2024-11-15 09:30:56.994890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:08.642 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.642 09:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.642 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.642 09:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.642 [2024-11-15 09:30:57.006703] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.642 [2024-11-15 09:30:57.006826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.642 [2024-11-15 09:30:57.006873] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.642 [2024-11-15 09:30:57.006905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.642 [2024-11-15 09:30:57.006927] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.642 [2024-11-15 09:30:57.006951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.642 [2024-11-15 09:30:57.006972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:12:08.642 [2024-11-15 09:30:57.006996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.642 [2024-11-15 09:30:57.058529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.642 BaseBdev1 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.642 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.643 [ 00:12:08.643 { 00:12:08.643 "name": "BaseBdev1", 00:12:08.643 "aliases": [ 00:12:08.643 "a7cbac9d-54c4-45ef-977f-63bc53074087" 00:12:08.643 ], 00:12:08.643 "product_name": "Malloc disk", 00:12:08.643 "block_size": 512, 00:12:08.643 "num_blocks": 65536, 00:12:08.643 "uuid": "a7cbac9d-54c4-45ef-977f-63bc53074087", 00:12:08.643 "assigned_rate_limits": { 00:12:08.643 "rw_ios_per_sec": 0, 00:12:08.643 "rw_mbytes_per_sec": 0, 00:12:08.643 "r_mbytes_per_sec": 0, 00:12:08.643 "w_mbytes_per_sec": 0 00:12:08.643 }, 00:12:08.643 "claimed": true, 00:12:08.643 "claim_type": "exclusive_write", 00:12:08.643 "zoned": false, 00:12:08.643 "supported_io_types": { 00:12:08.643 "read": true, 00:12:08.643 "write": true, 00:12:08.643 "unmap": true, 00:12:08.643 "flush": true, 00:12:08.643 "reset": true, 00:12:08.643 "nvme_admin": false, 00:12:08.643 "nvme_io": false, 00:12:08.643 "nvme_io_md": false, 00:12:08.643 "write_zeroes": true, 00:12:08.643 "zcopy": true, 00:12:08.643 "get_zone_info": false, 00:12:08.643 "zone_management": false, 00:12:08.643 "zone_append": false, 00:12:08.643 "compare": false, 00:12:08.643 "compare_and_write": false, 00:12:08.643 "abort": true, 00:12:08.643 "seek_hole": false, 00:12:08.643 "seek_data": false, 00:12:08.643 "copy": true, 00:12:08.643 "nvme_iov_md": false 00:12:08.643 }, 00:12:08.643 "memory_domains": [ 00:12:08.643 { 00:12:08.643 "dma_device_id": "system", 00:12:08.643 "dma_device_type": 1 00:12:08.643 }, 00:12:08.643 { 00:12:08.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.643 "dma_device_type": 2 00:12:08.643 } 00:12:08.643 
], 00:12:08.643 "driver_specific": {} 00:12:08.643 } 00:12:08.643 ] 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.643 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.902 09:30:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.902 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.902 "name": "Existed_Raid", 00:12:08.902 "uuid": "f98921f0-c202-426c-9184-b8dcc716768d", 00:12:08.902 "strip_size_kb": 0, 00:12:08.902 "state": "configuring", 00:12:08.902 "raid_level": "raid1", 00:12:08.902 "superblock": true, 00:12:08.902 "num_base_bdevs": 4, 00:12:08.902 "num_base_bdevs_discovered": 1, 00:12:08.902 "num_base_bdevs_operational": 4, 00:12:08.902 "base_bdevs_list": [ 00:12:08.902 { 00:12:08.902 "name": "BaseBdev1", 00:12:08.902 "uuid": "a7cbac9d-54c4-45ef-977f-63bc53074087", 00:12:08.902 "is_configured": true, 00:12:08.902 "data_offset": 2048, 00:12:08.902 "data_size": 63488 00:12:08.902 }, 00:12:08.902 { 00:12:08.902 "name": "BaseBdev2", 00:12:08.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.902 "is_configured": false, 00:12:08.902 "data_offset": 0, 00:12:08.902 "data_size": 0 00:12:08.902 }, 00:12:08.902 { 00:12:08.902 "name": "BaseBdev3", 00:12:08.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.902 "is_configured": false, 00:12:08.902 "data_offset": 0, 00:12:08.902 "data_size": 0 00:12:08.902 }, 00:12:08.902 { 00:12:08.902 "name": "BaseBdev4", 00:12:08.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.902 "is_configured": false, 00:12:08.902 "data_offset": 0, 00:12:08.902 "data_size": 0 00:12:08.902 } 00:12:08.902 ] 00:12:08.902 }' 00:12:08.902 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.902 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.161 09:30:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.161 [2024-11-15 09:30:57.513926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.161 [2024-11-15 09:30:57.513998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.161 [2024-11-15 09:30:57.525936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.161 [2024-11-15 09:30:57.527666] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.161 [2024-11-15 09:30:57.527775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.161 [2024-11-15 09:30:57.527790] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.161 [2024-11-15 09:30:57.527801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.161 [2024-11-15 09:30:57.527808] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:09.161 [2024-11-15 09:30:57.527816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:09.161 "name": "Existed_Raid", 00:12:09.161 "uuid": "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d", 00:12:09.161 "strip_size_kb": 0, 00:12:09.161 "state": "configuring", 00:12:09.161 "raid_level": "raid1", 00:12:09.161 "superblock": true, 00:12:09.161 "num_base_bdevs": 4, 00:12:09.161 "num_base_bdevs_discovered": 1, 00:12:09.161 "num_base_bdevs_operational": 4, 00:12:09.161 "base_bdevs_list": [ 00:12:09.161 { 00:12:09.161 "name": "BaseBdev1", 00:12:09.161 "uuid": "a7cbac9d-54c4-45ef-977f-63bc53074087", 00:12:09.161 "is_configured": true, 00:12:09.161 "data_offset": 2048, 00:12:09.161 "data_size": 63488 00:12:09.161 }, 00:12:09.161 { 00:12:09.161 "name": "BaseBdev2", 00:12:09.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.161 "is_configured": false, 00:12:09.161 "data_offset": 0, 00:12:09.161 "data_size": 0 00:12:09.161 }, 00:12:09.161 { 00:12:09.161 "name": "BaseBdev3", 00:12:09.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.161 "is_configured": false, 00:12:09.161 "data_offset": 0, 00:12:09.161 "data_size": 0 00:12:09.161 }, 00:12:09.161 { 00:12:09.161 "name": "BaseBdev4", 00:12:09.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.161 "is_configured": false, 00:12:09.161 "data_offset": 0, 00:12:09.161 "data_size": 0 00:12:09.161 } 00:12:09.161 ] 00:12:09.161 }' 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.161 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.731 09:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:09.731 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.731 09:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.731 [2024-11-15 09:30:58.041404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:12:09.731 BaseBdev2 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.731 [ 00:12:09.731 { 00:12:09.731 "name": "BaseBdev2", 00:12:09.731 "aliases": [ 00:12:09.731 "13015d03-7246-4256-9f10-fbf4b91ce286" 00:12:09.731 ], 00:12:09.731 "product_name": "Malloc disk", 00:12:09.731 "block_size": 512, 00:12:09.731 "num_blocks": 65536, 00:12:09.731 "uuid": "13015d03-7246-4256-9f10-fbf4b91ce286", 00:12:09.731 
"assigned_rate_limits": { 00:12:09.731 "rw_ios_per_sec": 0, 00:12:09.731 "rw_mbytes_per_sec": 0, 00:12:09.731 "r_mbytes_per_sec": 0, 00:12:09.731 "w_mbytes_per_sec": 0 00:12:09.731 }, 00:12:09.731 "claimed": true, 00:12:09.731 "claim_type": "exclusive_write", 00:12:09.731 "zoned": false, 00:12:09.731 "supported_io_types": { 00:12:09.731 "read": true, 00:12:09.731 "write": true, 00:12:09.731 "unmap": true, 00:12:09.731 "flush": true, 00:12:09.731 "reset": true, 00:12:09.731 "nvme_admin": false, 00:12:09.731 "nvme_io": false, 00:12:09.731 "nvme_io_md": false, 00:12:09.731 "write_zeroes": true, 00:12:09.731 "zcopy": true, 00:12:09.731 "get_zone_info": false, 00:12:09.731 "zone_management": false, 00:12:09.731 "zone_append": false, 00:12:09.731 "compare": false, 00:12:09.731 "compare_and_write": false, 00:12:09.731 "abort": true, 00:12:09.731 "seek_hole": false, 00:12:09.731 "seek_data": false, 00:12:09.731 "copy": true, 00:12:09.731 "nvme_iov_md": false 00:12:09.731 }, 00:12:09.731 "memory_domains": [ 00:12:09.731 { 00:12:09.731 "dma_device_id": "system", 00:12:09.731 "dma_device_type": 1 00:12:09.731 }, 00:12:09.731 { 00:12:09.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.731 "dma_device_type": 2 00:12:09.731 } 00:12:09.731 ], 00:12:09.731 "driver_specific": {} 00:12:09.731 } 00:12:09.731 ] 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.731 "name": "Existed_Raid", 00:12:09.731 "uuid": "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d", 00:12:09.731 "strip_size_kb": 0, 00:12:09.731 "state": "configuring", 00:12:09.731 "raid_level": "raid1", 00:12:09.731 "superblock": true, 00:12:09.731 "num_base_bdevs": 4, 00:12:09.731 "num_base_bdevs_discovered": 2, 00:12:09.731 "num_base_bdevs_operational": 4, 
00:12:09.731 "base_bdevs_list": [ 00:12:09.731 { 00:12:09.731 "name": "BaseBdev1", 00:12:09.731 "uuid": "a7cbac9d-54c4-45ef-977f-63bc53074087", 00:12:09.731 "is_configured": true, 00:12:09.731 "data_offset": 2048, 00:12:09.731 "data_size": 63488 00:12:09.731 }, 00:12:09.731 { 00:12:09.731 "name": "BaseBdev2", 00:12:09.731 "uuid": "13015d03-7246-4256-9f10-fbf4b91ce286", 00:12:09.731 "is_configured": true, 00:12:09.731 "data_offset": 2048, 00:12:09.731 "data_size": 63488 00:12:09.731 }, 00:12:09.731 { 00:12:09.731 "name": "BaseBdev3", 00:12:09.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.731 "is_configured": false, 00:12:09.731 "data_offset": 0, 00:12:09.731 "data_size": 0 00:12:09.731 }, 00:12:09.731 { 00:12:09.731 "name": "BaseBdev4", 00:12:09.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.731 "is_configured": false, 00:12:09.731 "data_offset": 0, 00:12:09.731 "data_size": 0 00:12:09.731 } 00:12:09.731 ] 00:12:09.731 }' 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.731 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 [2024-11-15 09:30:58.572281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.301 BaseBdev3 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 [ 00:12:10.301 { 00:12:10.301 "name": "BaseBdev3", 00:12:10.301 "aliases": [ 00:12:10.301 "c954eaff-872e-4cb0-b64b-deca2c782895" 00:12:10.301 ], 00:12:10.301 "product_name": "Malloc disk", 00:12:10.301 "block_size": 512, 00:12:10.301 "num_blocks": 65536, 00:12:10.301 "uuid": "c954eaff-872e-4cb0-b64b-deca2c782895", 00:12:10.301 "assigned_rate_limits": { 00:12:10.301 "rw_ios_per_sec": 0, 00:12:10.301 "rw_mbytes_per_sec": 0, 00:12:10.301 "r_mbytes_per_sec": 0, 00:12:10.301 "w_mbytes_per_sec": 0 00:12:10.301 }, 00:12:10.301 "claimed": true, 00:12:10.301 "claim_type": "exclusive_write", 00:12:10.301 "zoned": false, 00:12:10.301 "supported_io_types": { 00:12:10.301 "read": true, 00:12:10.301 
"write": true, 00:12:10.301 "unmap": true, 00:12:10.301 "flush": true, 00:12:10.301 "reset": true, 00:12:10.301 "nvme_admin": false, 00:12:10.301 "nvme_io": false, 00:12:10.301 "nvme_io_md": false, 00:12:10.301 "write_zeroes": true, 00:12:10.301 "zcopy": true, 00:12:10.301 "get_zone_info": false, 00:12:10.301 "zone_management": false, 00:12:10.301 "zone_append": false, 00:12:10.301 "compare": false, 00:12:10.301 "compare_and_write": false, 00:12:10.301 "abort": true, 00:12:10.301 "seek_hole": false, 00:12:10.301 "seek_data": false, 00:12:10.301 "copy": true, 00:12:10.301 "nvme_iov_md": false 00:12:10.301 }, 00:12:10.301 "memory_domains": [ 00:12:10.301 { 00:12:10.301 "dma_device_id": "system", 00:12:10.301 "dma_device_type": 1 00:12:10.301 }, 00:12:10.301 { 00:12:10.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.301 "dma_device_type": 2 00:12:10.301 } 00:12:10.301 ], 00:12:10.301 "driver_specific": {} 00:12:10.301 } 00:12:10.301 ] 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.301 "name": "Existed_Raid", 00:12:10.301 "uuid": "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d", 00:12:10.301 "strip_size_kb": 0, 00:12:10.301 "state": "configuring", 00:12:10.301 "raid_level": "raid1", 00:12:10.301 "superblock": true, 00:12:10.301 "num_base_bdevs": 4, 00:12:10.301 "num_base_bdevs_discovered": 3, 00:12:10.301 "num_base_bdevs_operational": 4, 00:12:10.301 "base_bdevs_list": [ 00:12:10.301 { 00:12:10.301 "name": "BaseBdev1", 00:12:10.301 "uuid": "a7cbac9d-54c4-45ef-977f-63bc53074087", 00:12:10.301 "is_configured": true, 00:12:10.301 "data_offset": 2048, 00:12:10.301 "data_size": 63488 00:12:10.301 }, 00:12:10.301 { 00:12:10.301 "name": "BaseBdev2", 00:12:10.301 "uuid": 
"13015d03-7246-4256-9f10-fbf4b91ce286", 00:12:10.301 "is_configured": true, 00:12:10.301 "data_offset": 2048, 00:12:10.301 "data_size": 63488 00:12:10.301 }, 00:12:10.301 { 00:12:10.301 "name": "BaseBdev3", 00:12:10.301 "uuid": "c954eaff-872e-4cb0-b64b-deca2c782895", 00:12:10.301 "is_configured": true, 00:12:10.301 "data_offset": 2048, 00:12:10.301 "data_size": 63488 00:12:10.301 }, 00:12:10.301 { 00:12:10.301 "name": "BaseBdev4", 00:12:10.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.301 "is_configured": false, 00:12:10.301 "data_offset": 0, 00:12:10.301 "data_size": 0 00:12:10.301 } 00:12:10.301 ] 00:12:10.301 }' 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.301 09:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 [2024-11-15 09:30:59.095390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:10.872 [2024-11-15 09:30:59.095768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:10.872 [2024-11-15 09:30:59.095821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.872 [2024-11-15 09:30:59.096190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:10.872 [2024-11-15 09:30:59.096392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:10.872 [2024-11-15 09:30:59.096443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:10.872 BaseBdev4 00:12:10.872 [2024-11-15 09:30:59.096632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 [ 00:12:10.872 { 00:12:10.872 "name": "BaseBdev4", 00:12:10.872 "aliases": [ 00:12:10.872 "1216c390-7b10-4b4a-a8a8-79b92ae40b22" 00:12:10.872 ], 00:12:10.872 "product_name": "Malloc disk", 00:12:10.872 "block_size": 512, 00:12:10.872 
"num_blocks": 65536, 00:12:10.872 "uuid": "1216c390-7b10-4b4a-a8a8-79b92ae40b22", 00:12:10.872 "assigned_rate_limits": { 00:12:10.872 "rw_ios_per_sec": 0, 00:12:10.872 "rw_mbytes_per_sec": 0, 00:12:10.872 "r_mbytes_per_sec": 0, 00:12:10.872 "w_mbytes_per_sec": 0 00:12:10.872 }, 00:12:10.872 "claimed": true, 00:12:10.872 "claim_type": "exclusive_write", 00:12:10.872 "zoned": false, 00:12:10.872 "supported_io_types": { 00:12:10.872 "read": true, 00:12:10.872 "write": true, 00:12:10.872 "unmap": true, 00:12:10.872 "flush": true, 00:12:10.872 "reset": true, 00:12:10.872 "nvme_admin": false, 00:12:10.872 "nvme_io": false, 00:12:10.872 "nvme_io_md": false, 00:12:10.872 "write_zeroes": true, 00:12:10.872 "zcopy": true, 00:12:10.872 "get_zone_info": false, 00:12:10.872 "zone_management": false, 00:12:10.872 "zone_append": false, 00:12:10.872 "compare": false, 00:12:10.872 "compare_and_write": false, 00:12:10.872 "abort": true, 00:12:10.872 "seek_hole": false, 00:12:10.872 "seek_data": false, 00:12:10.872 "copy": true, 00:12:10.872 "nvme_iov_md": false 00:12:10.872 }, 00:12:10.872 "memory_domains": [ 00:12:10.872 { 00:12:10.872 "dma_device_id": "system", 00:12:10.872 "dma_device_type": 1 00:12:10.872 }, 00:12:10.872 { 00:12:10.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.872 "dma_device_type": 2 00:12:10.872 } 00:12:10.872 ], 00:12:10.872 "driver_specific": {} 00:12:10.872 } 00:12:10.872 ] 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.872 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.872 "name": "Existed_Raid", 00:12:10.872 "uuid": "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d", 00:12:10.872 "strip_size_kb": 0, 00:12:10.873 "state": "online", 00:12:10.873 "raid_level": "raid1", 00:12:10.873 "superblock": true, 00:12:10.873 "num_base_bdevs": 4, 
00:12:10.873 "num_base_bdevs_discovered": 4, 00:12:10.873 "num_base_bdevs_operational": 4, 00:12:10.873 "base_bdevs_list": [ 00:12:10.873 { 00:12:10.873 "name": "BaseBdev1", 00:12:10.873 "uuid": "a7cbac9d-54c4-45ef-977f-63bc53074087", 00:12:10.873 "is_configured": true, 00:12:10.873 "data_offset": 2048, 00:12:10.873 "data_size": 63488 00:12:10.873 }, 00:12:10.873 { 00:12:10.873 "name": "BaseBdev2", 00:12:10.873 "uuid": "13015d03-7246-4256-9f10-fbf4b91ce286", 00:12:10.873 "is_configured": true, 00:12:10.873 "data_offset": 2048, 00:12:10.873 "data_size": 63488 00:12:10.873 }, 00:12:10.873 { 00:12:10.873 "name": "BaseBdev3", 00:12:10.873 "uuid": "c954eaff-872e-4cb0-b64b-deca2c782895", 00:12:10.873 "is_configured": true, 00:12:10.873 "data_offset": 2048, 00:12:10.873 "data_size": 63488 00:12:10.873 }, 00:12:10.873 { 00:12:10.873 "name": "BaseBdev4", 00:12:10.873 "uuid": "1216c390-7b10-4b4a-a8a8-79b92ae40b22", 00:12:10.873 "is_configured": true, 00:12:10.873 "data_offset": 2048, 00:12:10.873 "data_size": 63488 00:12:10.873 } 00:12:10.873 ] 00:12:10.873 }' 00:12:10.873 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.873 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.444 
09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.444 [2024-11-15 09:30:59.650950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.444 "name": "Existed_Raid", 00:12:11.444 "aliases": [ 00:12:11.444 "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d" 00:12:11.444 ], 00:12:11.444 "product_name": "Raid Volume", 00:12:11.444 "block_size": 512, 00:12:11.444 "num_blocks": 63488, 00:12:11.444 "uuid": "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d", 00:12:11.444 "assigned_rate_limits": { 00:12:11.444 "rw_ios_per_sec": 0, 00:12:11.444 "rw_mbytes_per_sec": 0, 00:12:11.444 "r_mbytes_per_sec": 0, 00:12:11.444 "w_mbytes_per_sec": 0 00:12:11.444 }, 00:12:11.444 "claimed": false, 00:12:11.444 "zoned": false, 00:12:11.444 "supported_io_types": { 00:12:11.444 "read": true, 00:12:11.444 "write": true, 00:12:11.444 "unmap": false, 00:12:11.444 "flush": false, 00:12:11.444 "reset": true, 00:12:11.444 "nvme_admin": false, 00:12:11.444 "nvme_io": false, 00:12:11.444 "nvme_io_md": false, 00:12:11.444 "write_zeroes": true, 00:12:11.444 "zcopy": false, 00:12:11.444 "get_zone_info": false, 00:12:11.444 "zone_management": false, 00:12:11.444 "zone_append": false, 00:12:11.444 "compare": false, 00:12:11.444 "compare_and_write": false, 00:12:11.444 "abort": false, 00:12:11.444 "seek_hole": false, 00:12:11.444 "seek_data": false, 00:12:11.444 "copy": false, 00:12:11.444 
"nvme_iov_md": false 00:12:11.444 }, 00:12:11.444 "memory_domains": [ 00:12:11.444 { 00:12:11.444 "dma_device_id": "system", 00:12:11.444 "dma_device_type": 1 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.444 "dma_device_type": 2 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "dma_device_id": "system", 00:12:11.444 "dma_device_type": 1 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.444 "dma_device_type": 2 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "dma_device_id": "system", 00:12:11.444 "dma_device_type": 1 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.444 "dma_device_type": 2 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "dma_device_id": "system", 00:12:11.444 "dma_device_type": 1 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.444 "dma_device_type": 2 00:12:11.444 } 00:12:11.444 ], 00:12:11.444 "driver_specific": { 00:12:11.444 "raid": { 00:12:11.444 "uuid": "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d", 00:12:11.444 "strip_size_kb": 0, 00:12:11.444 "state": "online", 00:12:11.444 "raid_level": "raid1", 00:12:11.444 "superblock": true, 00:12:11.444 "num_base_bdevs": 4, 00:12:11.444 "num_base_bdevs_discovered": 4, 00:12:11.444 "num_base_bdevs_operational": 4, 00:12:11.444 "base_bdevs_list": [ 00:12:11.444 { 00:12:11.444 "name": "BaseBdev1", 00:12:11.444 "uuid": "a7cbac9d-54c4-45ef-977f-63bc53074087", 00:12:11.444 "is_configured": true, 00:12:11.444 "data_offset": 2048, 00:12:11.444 "data_size": 63488 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "name": "BaseBdev2", 00:12:11.444 "uuid": "13015d03-7246-4256-9f10-fbf4b91ce286", 00:12:11.444 "is_configured": true, 00:12:11.444 "data_offset": 2048, 00:12:11.444 "data_size": 63488 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "name": "BaseBdev3", 00:12:11.444 "uuid": "c954eaff-872e-4cb0-b64b-deca2c782895", 00:12:11.444 "is_configured": true, 
00:12:11.444 "data_offset": 2048, 00:12:11.444 "data_size": 63488 00:12:11.444 }, 00:12:11.444 { 00:12:11.444 "name": "BaseBdev4", 00:12:11.444 "uuid": "1216c390-7b10-4b4a-a8a8-79b92ae40b22", 00:12:11.444 "is_configured": true, 00:12:11.444 "data_offset": 2048, 00:12:11.444 "data_size": 63488 00:12:11.444 } 00:12:11.444 ] 00:12:11.444 } 00:12:11.444 } 00:12:11.444 }' 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.444 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:11.444 BaseBdev2 00:12:11.444 BaseBdev3 00:12:11.444 BaseBdev4' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.445 09:30:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.445 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.705 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.705 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.705 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.705 09:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:11.705 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.705 09:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.705 [2024-11-15 09:30:59.954085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:11.705 09:31:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.705 "name": "Existed_Raid", 00:12:11.705 "uuid": "5bf4cfec-6ccb-4727-a0bf-12de4c239d8d", 00:12:11.705 "strip_size_kb": 0, 00:12:11.705 
"state": "online", 00:12:11.705 "raid_level": "raid1", 00:12:11.705 "superblock": true, 00:12:11.705 "num_base_bdevs": 4, 00:12:11.705 "num_base_bdevs_discovered": 3, 00:12:11.705 "num_base_bdevs_operational": 3, 00:12:11.705 "base_bdevs_list": [ 00:12:11.705 { 00:12:11.705 "name": null, 00:12:11.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.705 "is_configured": false, 00:12:11.705 "data_offset": 0, 00:12:11.705 "data_size": 63488 00:12:11.705 }, 00:12:11.705 { 00:12:11.705 "name": "BaseBdev2", 00:12:11.705 "uuid": "13015d03-7246-4256-9f10-fbf4b91ce286", 00:12:11.705 "is_configured": true, 00:12:11.705 "data_offset": 2048, 00:12:11.705 "data_size": 63488 00:12:11.705 }, 00:12:11.705 { 00:12:11.705 "name": "BaseBdev3", 00:12:11.705 "uuid": "c954eaff-872e-4cb0-b64b-deca2c782895", 00:12:11.705 "is_configured": true, 00:12:11.705 "data_offset": 2048, 00:12:11.705 "data_size": 63488 00:12:11.705 }, 00:12:11.705 { 00:12:11.705 "name": "BaseBdev4", 00:12:11.705 "uuid": "1216c390-7b10-4b4a-a8a8-79b92ae40b22", 00:12:11.705 "is_configured": true, 00:12:11.705 "data_offset": 2048, 00:12:11.705 "data_size": 63488 00:12:11.705 } 00:12:11.705 ] 00:12:11.705 }' 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.705 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.283 09:31:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.283 [2024-11-15 09:31:00.551748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.283 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.283 [2024-11-15 09:31:00.706339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.541 [2024-11-15 09:31:00.870202] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:12.541 [2024-11-15 09:31:00.870409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.541 [2024-11-15 09:31:00.968988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.541 [2024-11-15 09:31:00.969147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.541 [2024-11-15 09:31:00.969195] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.541 09:31:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 BaseBdev2 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:12.799 [ 00:12:12.799 { 00:12:12.799 "name": "BaseBdev2", 00:12:12.799 "aliases": [ 00:12:12.799 "8be4ff61-48d8-4f3c-964c-714f656fb701" 00:12:12.799 ], 00:12:12.799 "product_name": "Malloc disk", 00:12:12.799 "block_size": 512, 00:12:12.799 "num_blocks": 65536, 00:12:12.799 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:12.799 "assigned_rate_limits": { 00:12:12.799 "rw_ios_per_sec": 0, 00:12:12.799 "rw_mbytes_per_sec": 0, 00:12:12.799 "r_mbytes_per_sec": 0, 00:12:12.799 "w_mbytes_per_sec": 0 00:12:12.799 }, 00:12:12.799 "claimed": false, 00:12:12.799 "zoned": false, 00:12:12.799 "supported_io_types": { 00:12:12.799 "read": true, 00:12:12.799 "write": true, 00:12:12.799 "unmap": true, 00:12:12.799 "flush": true, 00:12:12.799 "reset": true, 00:12:12.799 "nvme_admin": false, 00:12:12.799 "nvme_io": false, 00:12:12.799 "nvme_io_md": false, 00:12:12.799 "write_zeroes": true, 00:12:12.799 "zcopy": true, 00:12:12.799 "get_zone_info": false, 00:12:12.799 "zone_management": false, 00:12:12.799 "zone_append": false, 00:12:12.799 "compare": false, 00:12:12.799 "compare_and_write": false, 00:12:12.799 "abort": true, 00:12:12.799 "seek_hole": false, 00:12:12.799 "seek_data": false, 00:12:12.799 "copy": true, 00:12:12.799 "nvme_iov_md": false 00:12:12.799 }, 00:12:12.799 "memory_domains": [ 00:12:12.799 { 00:12:12.799 "dma_device_id": "system", 00:12:12.799 "dma_device_type": 1 00:12:12.799 }, 00:12:12.799 { 00:12:12.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.799 "dma_device_type": 2 00:12:12.799 } 00:12:12.799 ], 00:12:12.799 "driver_specific": {} 00:12:12.799 } 00:12:12.799 ] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.799 09:31:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 BaseBdev3 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.799 09:31:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.799 [ 00:12:12.799 { 00:12:12.799 "name": "BaseBdev3", 00:12:12.799 "aliases": [ 00:12:12.799 "c4b6166f-7d12-477b-a37d-329ab4464431" 00:12:12.799 ], 00:12:12.799 "product_name": "Malloc disk", 00:12:12.799 "block_size": 512, 00:12:12.799 "num_blocks": 65536, 00:12:12.799 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:12.799 "assigned_rate_limits": { 00:12:12.799 "rw_ios_per_sec": 0, 00:12:12.799 "rw_mbytes_per_sec": 0, 00:12:12.799 "r_mbytes_per_sec": 0, 00:12:12.799 "w_mbytes_per_sec": 0 00:12:12.799 }, 00:12:12.799 "claimed": false, 00:12:12.799 "zoned": false, 00:12:12.799 "supported_io_types": { 00:12:12.799 "read": true, 00:12:12.799 "write": true, 00:12:12.799 "unmap": true, 00:12:12.799 "flush": true, 00:12:12.799 "reset": true, 00:12:12.799 "nvme_admin": false, 00:12:12.799 "nvme_io": false, 00:12:12.799 "nvme_io_md": false, 00:12:12.799 "write_zeroes": true, 00:12:12.799 "zcopy": true, 00:12:12.799 "get_zone_info": false, 00:12:12.799 "zone_management": false, 00:12:12.799 "zone_append": false, 00:12:12.799 "compare": false, 00:12:12.799 "compare_and_write": false, 00:12:12.799 "abort": true, 00:12:12.799 "seek_hole": false, 00:12:12.799 "seek_data": false, 00:12:12.799 "copy": true, 00:12:12.799 "nvme_iov_md": false 00:12:12.799 }, 00:12:12.799 "memory_domains": [ 00:12:12.799 { 00:12:12.799 "dma_device_id": "system", 00:12:12.799 "dma_device_type": 1 00:12:12.799 }, 00:12:12.799 { 00:12:12.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.799 "dma_device_type": 2 00:12:12.799 } 00:12:12.799 ], 00:12:12.799 "driver_specific": {} 00:12:12.799 } 00:12:12.799 ] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.799 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.800 BaseBdev4 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.800 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.800 [ 00:12:12.800 { 00:12:12.800 "name": "BaseBdev4", 00:12:12.800 "aliases": [ 00:12:12.800 "d30f0c4a-b8fe-4e84-9ccc-57272240b54a" 00:12:12.800 ], 00:12:12.800 "product_name": "Malloc disk", 00:12:12.800 "block_size": 512, 00:12:12.800 "num_blocks": 65536, 00:12:12.800 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:12.800 "assigned_rate_limits": { 00:12:12.800 "rw_ios_per_sec": 0, 00:12:12.800 "rw_mbytes_per_sec": 0, 00:12:12.800 "r_mbytes_per_sec": 0, 00:12:12.800 "w_mbytes_per_sec": 0 00:12:12.800 }, 00:12:12.800 "claimed": false, 00:12:12.800 "zoned": false, 00:12:12.800 "supported_io_types": { 00:12:12.800 "read": true, 00:12:12.800 "write": true, 00:12:12.800 "unmap": true, 00:12:12.800 "flush": true, 00:12:12.800 "reset": true, 00:12:12.800 "nvme_admin": false, 00:12:13.057 "nvme_io": false, 00:12:13.057 "nvme_io_md": false, 00:12:13.057 "write_zeroes": true, 00:12:13.057 "zcopy": true, 00:12:13.057 "get_zone_info": false, 00:12:13.057 "zone_management": false, 00:12:13.057 "zone_append": false, 00:12:13.057 "compare": false, 00:12:13.057 "compare_and_write": false, 00:12:13.057 "abort": true, 00:12:13.057 "seek_hole": false, 00:12:13.057 "seek_data": false, 00:12:13.057 "copy": true, 00:12:13.057 "nvme_iov_md": false 00:12:13.057 }, 00:12:13.057 "memory_domains": [ 00:12:13.057 { 00:12:13.057 "dma_device_id": "system", 00:12:13.057 "dma_device_type": 1 00:12:13.057 }, 00:12:13.057 { 00:12:13.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.057 "dma_device_type": 2 00:12:13.057 } 00:12:13.057 ], 00:12:13.057 "driver_specific": {} 00:12:13.057 } 00:12:13.057 ] 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
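The repeated `waitforbdev` sequences above (for BaseBdev2, BaseBdev3, and BaseBdev4) poll `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the freshly created malloc bdev is visible. The shape of that poll-with-timeout loop can be sketched as follows; `get_bdev` here is a hypothetical stand-in for the RPC call, not part of SPDK:

```python
import time

def wait_for_bdev(get_bdev, name, timeout_s=2.0, poll_s=0.1):
    """Poll get_bdev(name) until it returns a record or the timeout expires.

    get_bdev stands in for `rpc.py bdev_get_bdevs -b <name>`; it should
    return the bdev's info dict, or None while the bdev does not exist yet.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        time.sleep(poll_s)
    raise TimeoutError(f"bdev {name!r} did not appear within {timeout_s}s")

# Simulated target where BaseBdev2 already exists (values from the trace above):
registry = {"BaseBdev2": {"name": "BaseBdev2", "block_size": 512, "num_blocks": 65536}}
found = wait_for_bdev(registry.get, "BaseBdev2")
```

The `-t 2000` in the trace is the same idea expressed as a 2000 ms timeout handed to `bdev_get_bdevs` itself.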
00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.057 [2024-11-15 09:31:01.277990] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:13.057 [2024-11-15 09:31:01.278158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:13.057 [2024-11-15 09:31:01.278207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.057 [2024-11-15 09:31:01.280179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.057 [2024-11-15 09:31:01.280284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.057 "name": "Existed_Raid", 00:12:13.057 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:13.057 "strip_size_kb": 0, 00:12:13.057 "state": "configuring", 00:12:13.057 "raid_level": "raid1", 00:12:13.057 "superblock": true, 00:12:13.057 "num_base_bdevs": 4, 00:12:13.057 "num_base_bdevs_discovered": 3, 00:12:13.057 "num_base_bdevs_operational": 4, 00:12:13.057 "base_bdevs_list": [ 00:12:13.057 { 00:12:13.057 "name": "BaseBdev1", 00:12:13.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.057 "is_configured": false, 00:12:13.057 "data_offset": 0, 00:12:13.057 "data_size": 0 00:12:13.057 }, 00:12:13.057 { 00:12:13.057 "name": "BaseBdev2", 00:12:13.057 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 
00:12:13.057 "is_configured": true, 00:12:13.057 "data_offset": 2048, 00:12:13.057 "data_size": 63488 00:12:13.057 }, 00:12:13.057 { 00:12:13.057 "name": "BaseBdev3", 00:12:13.057 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:13.057 "is_configured": true, 00:12:13.057 "data_offset": 2048, 00:12:13.057 "data_size": 63488 00:12:13.057 }, 00:12:13.057 { 00:12:13.057 "name": "BaseBdev4", 00:12:13.057 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:13.057 "is_configured": true, 00:12:13.057 "data_offset": 2048, 00:12:13.057 "data_size": 63488 00:12:13.057 } 00:12:13.057 ] 00:12:13.057 }' 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.057 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.624 [2024-11-15 09:31:01.801175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.624 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.625 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.625 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.625 "name": "Existed_Raid", 00:12:13.625 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:13.625 "strip_size_kb": 0, 00:12:13.625 "state": "configuring", 00:12:13.625 "raid_level": "raid1", 00:12:13.625 "superblock": true, 00:12:13.625 "num_base_bdevs": 4, 00:12:13.625 "num_base_bdevs_discovered": 2, 00:12:13.625 "num_base_bdevs_operational": 4, 00:12:13.625 "base_bdevs_list": [ 00:12:13.625 { 00:12:13.625 "name": "BaseBdev1", 00:12:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.625 "is_configured": false, 00:12:13.625 "data_offset": 0, 00:12:13.625 "data_size": 0 00:12:13.625 }, 00:12:13.625 { 00:12:13.625 "name": null, 00:12:13.625 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:13.625 
"is_configured": false, 00:12:13.625 "data_offset": 0, 00:12:13.625 "data_size": 63488 00:12:13.625 }, 00:12:13.625 { 00:12:13.625 "name": "BaseBdev3", 00:12:13.625 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:13.625 "is_configured": true, 00:12:13.625 "data_offset": 2048, 00:12:13.625 "data_size": 63488 00:12:13.625 }, 00:12:13.625 { 00:12:13.625 "name": "BaseBdev4", 00:12:13.625 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:13.625 "is_configured": true, 00:12:13.625 "data_offset": 2048, 00:12:13.625 "data_size": 63488 00:12:13.625 } 00:12:13.625 ] 00:12:13.625 }' 00:12:13.625 09:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.625 09:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.883 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.143 [2024-11-15 09:31:02.355238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.143 BaseBdev1 
00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.143 [ 00:12:14.143 { 00:12:14.143 "name": "BaseBdev1", 00:12:14.143 "aliases": [ 00:12:14.143 "9606297d-f214-44d4-9d1a-470e34369c38" 00:12:14.143 ], 00:12:14.143 "product_name": "Malloc disk", 00:12:14.143 "block_size": 512, 00:12:14.143 "num_blocks": 65536, 00:12:14.143 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:14.143 "assigned_rate_limits": { 00:12:14.143 
"rw_ios_per_sec": 0, 00:12:14.143 "rw_mbytes_per_sec": 0, 00:12:14.143 "r_mbytes_per_sec": 0, 00:12:14.143 "w_mbytes_per_sec": 0 00:12:14.143 }, 00:12:14.143 "claimed": true, 00:12:14.143 "claim_type": "exclusive_write", 00:12:14.143 "zoned": false, 00:12:14.143 "supported_io_types": { 00:12:14.143 "read": true, 00:12:14.143 "write": true, 00:12:14.143 "unmap": true, 00:12:14.143 "flush": true, 00:12:14.143 "reset": true, 00:12:14.143 "nvme_admin": false, 00:12:14.143 "nvme_io": false, 00:12:14.143 "nvme_io_md": false, 00:12:14.143 "write_zeroes": true, 00:12:14.143 "zcopy": true, 00:12:14.143 "get_zone_info": false, 00:12:14.143 "zone_management": false, 00:12:14.143 "zone_append": false, 00:12:14.143 "compare": false, 00:12:14.143 "compare_and_write": false, 00:12:14.143 "abort": true, 00:12:14.143 "seek_hole": false, 00:12:14.143 "seek_data": false, 00:12:14.143 "copy": true, 00:12:14.143 "nvme_iov_md": false 00:12:14.143 }, 00:12:14.143 "memory_domains": [ 00:12:14.143 { 00:12:14.143 "dma_device_id": "system", 00:12:14.143 "dma_device_type": 1 00:12:14.143 }, 00:12:14.143 { 00:12:14.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.143 "dma_device_type": 2 00:12:14.143 } 00:12:14.143 ], 00:12:14.143 "driver_specific": {} 00:12:14.143 } 00:12:14.143 ] 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.143 "name": "Existed_Raid", 00:12:14.143 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:14.143 "strip_size_kb": 0, 00:12:14.143 "state": "configuring", 00:12:14.143 "raid_level": "raid1", 00:12:14.143 "superblock": true, 00:12:14.143 "num_base_bdevs": 4, 00:12:14.143 "num_base_bdevs_discovered": 3, 00:12:14.143 "num_base_bdevs_operational": 4, 00:12:14.143 "base_bdevs_list": [ 00:12:14.143 { 00:12:14.143 "name": "BaseBdev1", 00:12:14.143 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:14.143 "is_configured": true, 00:12:14.143 "data_offset": 2048, 00:12:14.143 "data_size": 63488 
00:12:14.143 }, 00:12:14.143 { 00:12:14.143 "name": null, 00:12:14.143 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:14.143 "is_configured": false, 00:12:14.143 "data_offset": 0, 00:12:14.143 "data_size": 63488 00:12:14.143 }, 00:12:14.143 { 00:12:14.143 "name": "BaseBdev3", 00:12:14.143 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:14.143 "is_configured": true, 00:12:14.143 "data_offset": 2048, 00:12:14.143 "data_size": 63488 00:12:14.143 }, 00:12:14.143 { 00:12:14.143 "name": "BaseBdev4", 00:12:14.143 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:14.143 "is_configured": true, 00:12:14.143 "data_offset": 2048, 00:12:14.143 "data_size": 63488 00:12:14.143 } 00:12:14.143 ] 00:12:14.143 }' 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.143 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.429 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.429 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.429 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.429 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:14.429 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.717 
[2024-11-15 09:31:02.886479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.717 09:31:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.717 "name": "Existed_Raid", 00:12:14.717 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:14.717 "strip_size_kb": 0, 00:12:14.717 "state": "configuring", 00:12:14.717 "raid_level": "raid1", 00:12:14.717 "superblock": true, 00:12:14.717 "num_base_bdevs": 4, 00:12:14.717 "num_base_bdevs_discovered": 2, 00:12:14.717 "num_base_bdevs_operational": 4, 00:12:14.717 "base_bdevs_list": [ 00:12:14.717 { 00:12:14.717 "name": "BaseBdev1", 00:12:14.717 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:14.717 "is_configured": true, 00:12:14.717 "data_offset": 2048, 00:12:14.717 "data_size": 63488 00:12:14.717 }, 00:12:14.717 { 00:12:14.717 "name": null, 00:12:14.717 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:14.717 "is_configured": false, 00:12:14.717 "data_offset": 0, 00:12:14.717 "data_size": 63488 00:12:14.717 }, 00:12:14.717 { 00:12:14.717 "name": null, 00:12:14.717 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:14.717 "is_configured": false, 00:12:14.717 "data_offset": 0, 00:12:14.717 "data_size": 63488 00:12:14.717 }, 00:12:14.717 { 00:12:14.717 "name": "BaseBdev4", 00:12:14.717 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:14.717 "is_configured": true, 00:12:14.717 "data_offset": 2048, 00:12:14.717 "data_size": 63488 00:12:14.717 } 00:12:14.717 ] 00:12:14.717 }' 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.717 09:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.977 
09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.977 [2024-11-15 09:31:03.385591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.977 "name": "Existed_Raid", 00:12:14.977 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:14.977 "strip_size_kb": 0, 00:12:14.977 "state": "configuring", 00:12:14.977 "raid_level": "raid1", 00:12:14.977 "superblock": true, 00:12:14.977 "num_base_bdevs": 4, 00:12:14.977 "num_base_bdevs_discovered": 3, 00:12:14.977 "num_base_bdevs_operational": 4, 00:12:14.977 "base_bdevs_list": [ 00:12:14.977 { 00:12:14.977 "name": "BaseBdev1", 00:12:14.977 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:14.977 "is_configured": true, 00:12:14.977 "data_offset": 2048, 00:12:14.977 "data_size": 63488 00:12:14.977 }, 00:12:14.977 { 00:12:14.977 "name": null, 00:12:14.977 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:14.977 "is_configured": false, 00:12:14.977 "data_offset": 0, 00:12:14.977 "data_size": 63488 00:12:14.977 }, 00:12:14.977 { 00:12:14.977 "name": "BaseBdev3", 00:12:14.977 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:14.977 "is_configured": true, 00:12:14.977 "data_offset": 2048, 00:12:14.977 "data_size": 63488 00:12:14.977 }, 00:12:14.977 { 00:12:14.977 "name": "BaseBdev4", 00:12:14.977 "uuid": 
"d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:14.977 "is_configured": true, 00:12:14.977 "data_offset": 2048, 00:12:14.977 "data_size": 63488 00:12:14.977 } 00:12:14.977 ] 00:12:14.977 }' 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.977 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.572 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.572 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:15.572 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.573 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.573 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.573 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:15.573 09:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:15.573 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.573 09:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.573 [2024-11-15 09:31:03.908769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.573 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.832 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.832 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.832 "name": "Existed_Raid", 00:12:15.832 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:15.832 "strip_size_kb": 0, 00:12:15.832 "state": "configuring", 00:12:15.832 "raid_level": "raid1", 00:12:15.832 "superblock": true, 00:12:15.832 "num_base_bdevs": 4, 00:12:15.832 "num_base_bdevs_discovered": 2, 00:12:15.832 "num_base_bdevs_operational": 4, 00:12:15.832 "base_bdevs_list": [ 00:12:15.832 { 00:12:15.832 "name": null, 00:12:15.832 
"uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:15.832 "is_configured": false, 00:12:15.832 "data_offset": 0, 00:12:15.832 "data_size": 63488 00:12:15.832 }, 00:12:15.832 { 00:12:15.832 "name": null, 00:12:15.832 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:15.832 "is_configured": false, 00:12:15.832 "data_offset": 0, 00:12:15.832 "data_size": 63488 00:12:15.832 }, 00:12:15.832 { 00:12:15.832 "name": "BaseBdev3", 00:12:15.832 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:15.832 "is_configured": true, 00:12:15.832 "data_offset": 2048, 00:12:15.832 "data_size": 63488 00:12:15.832 }, 00:12:15.832 { 00:12:15.832 "name": "BaseBdev4", 00:12:15.832 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:15.832 "is_configured": true, 00:12:15.832 "data_offset": 2048, 00:12:15.832 "data_size": 63488 00:12:15.832 } 00:12:15.832 ] 00:12:15.832 }' 00:12:15.832 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.832 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.092 [2024-11-15 09:31:04.505602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.092 09:31:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.092 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.352 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.352 "name": "Existed_Raid", 00:12:16.352 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:16.352 "strip_size_kb": 0, 00:12:16.352 "state": "configuring", 00:12:16.352 "raid_level": "raid1", 00:12:16.352 "superblock": true, 00:12:16.352 "num_base_bdevs": 4, 00:12:16.352 "num_base_bdevs_discovered": 3, 00:12:16.352 "num_base_bdevs_operational": 4, 00:12:16.352 "base_bdevs_list": [ 00:12:16.352 { 00:12:16.352 "name": null, 00:12:16.352 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:16.352 "is_configured": false, 00:12:16.352 "data_offset": 0, 00:12:16.352 "data_size": 63488 00:12:16.352 }, 00:12:16.352 { 00:12:16.352 "name": "BaseBdev2", 00:12:16.352 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:16.352 "is_configured": true, 00:12:16.352 "data_offset": 2048, 00:12:16.352 "data_size": 63488 00:12:16.352 }, 00:12:16.352 { 00:12:16.352 "name": "BaseBdev3", 00:12:16.352 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:16.352 "is_configured": true, 00:12:16.352 "data_offset": 2048, 00:12:16.352 "data_size": 63488 00:12:16.352 }, 00:12:16.352 { 00:12:16.352 "name": "BaseBdev4", 00:12:16.352 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:16.352 "is_configured": true, 00:12:16.352 "data_offset": 2048, 00:12:16.352 "data_size": 63488 00:12:16.352 } 00:12:16.352 ] 00:12:16.352 }' 00:12:16.352 09:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.352 09:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.610 09:31:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.610 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9606297d-f214-44d4-9d1a-470e34369c38 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.869 [2024-11-15 09:31:05.141268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:16.869 [2024-11-15 09:31:05.141536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:16.869 [2024-11-15 09:31:05.141555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.869 [2024-11-15 09:31:05.141870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:16.869 [2024-11-15 09:31:05.142044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:16.869 [2024-11-15 09:31:05.142056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:16.869 [2024-11-15 09:31:05.142214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.869 NewBaseBdev 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.869 09:31:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.869 [ 00:12:16.869 { 00:12:16.869 "name": "NewBaseBdev", 00:12:16.869 "aliases": [ 00:12:16.869 "9606297d-f214-44d4-9d1a-470e34369c38" 00:12:16.869 ], 00:12:16.869 "product_name": "Malloc disk", 00:12:16.869 "block_size": 512, 00:12:16.869 "num_blocks": 65536, 00:12:16.869 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:16.869 "assigned_rate_limits": { 00:12:16.869 "rw_ios_per_sec": 0, 00:12:16.869 "rw_mbytes_per_sec": 0, 00:12:16.869 "r_mbytes_per_sec": 0, 00:12:16.869 "w_mbytes_per_sec": 0 00:12:16.869 }, 00:12:16.869 "claimed": true, 00:12:16.869 "claim_type": "exclusive_write", 00:12:16.869 "zoned": false, 00:12:16.869 "supported_io_types": { 00:12:16.869 "read": true, 00:12:16.869 "write": true, 00:12:16.869 "unmap": true, 00:12:16.869 "flush": true, 00:12:16.869 "reset": true, 00:12:16.869 "nvme_admin": false, 00:12:16.869 "nvme_io": false, 00:12:16.869 "nvme_io_md": false, 00:12:16.869 "write_zeroes": true, 00:12:16.869 "zcopy": true, 00:12:16.869 "get_zone_info": false, 00:12:16.869 "zone_management": false, 00:12:16.869 "zone_append": false, 00:12:16.869 "compare": false, 00:12:16.869 "compare_and_write": false, 00:12:16.869 "abort": true, 00:12:16.869 "seek_hole": false, 00:12:16.869 "seek_data": false, 00:12:16.869 "copy": true, 00:12:16.869 "nvme_iov_md": false 00:12:16.869 }, 00:12:16.869 "memory_domains": [ 00:12:16.869 { 00:12:16.869 "dma_device_id": "system", 00:12:16.869 "dma_device_type": 1 00:12:16.869 }, 00:12:16.869 { 00:12:16.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.869 "dma_device_type": 2 00:12:16.869 } 00:12:16.869 ], 00:12:16.869 "driver_specific": {} 00:12:16.869 } 00:12:16.869 ] 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:16.869 09:31:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.869 "name": "Existed_Raid", 00:12:16.869 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:16.869 "strip_size_kb": 0, 00:12:16.869 
"state": "online", 00:12:16.869 "raid_level": "raid1", 00:12:16.869 "superblock": true, 00:12:16.869 "num_base_bdevs": 4, 00:12:16.869 "num_base_bdevs_discovered": 4, 00:12:16.869 "num_base_bdevs_operational": 4, 00:12:16.869 "base_bdevs_list": [ 00:12:16.869 { 00:12:16.869 "name": "NewBaseBdev", 00:12:16.869 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:16.869 "is_configured": true, 00:12:16.869 "data_offset": 2048, 00:12:16.869 "data_size": 63488 00:12:16.869 }, 00:12:16.869 { 00:12:16.869 "name": "BaseBdev2", 00:12:16.869 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:16.869 "is_configured": true, 00:12:16.869 "data_offset": 2048, 00:12:16.869 "data_size": 63488 00:12:16.869 }, 00:12:16.869 { 00:12:16.869 "name": "BaseBdev3", 00:12:16.869 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:16.869 "is_configured": true, 00:12:16.869 "data_offset": 2048, 00:12:16.869 "data_size": 63488 00:12:16.869 }, 00:12:16.869 { 00:12:16.869 "name": "BaseBdev4", 00:12:16.869 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:16.869 "is_configured": true, 00:12:16.869 "data_offset": 2048, 00:12:16.869 "data_size": 63488 00:12:16.869 } 00:12:16.869 ] 00:12:16.869 }' 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.869 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.438 
09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.438 [2024-11-15 09:31:05.608943] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.438 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.438 "name": "Existed_Raid", 00:12:17.438 "aliases": [ 00:12:17.438 "06a41cdc-41a8-40fb-af44-2f14f854499e" 00:12:17.438 ], 00:12:17.438 "product_name": "Raid Volume", 00:12:17.438 "block_size": 512, 00:12:17.438 "num_blocks": 63488, 00:12:17.438 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:17.438 "assigned_rate_limits": { 00:12:17.438 "rw_ios_per_sec": 0, 00:12:17.438 "rw_mbytes_per_sec": 0, 00:12:17.438 "r_mbytes_per_sec": 0, 00:12:17.438 "w_mbytes_per_sec": 0 00:12:17.438 }, 00:12:17.438 "claimed": false, 00:12:17.438 "zoned": false, 00:12:17.438 "supported_io_types": { 00:12:17.438 "read": true, 00:12:17.438 "write": true, 00:12:17.438 "unmap": false, 00:12:17.438 "flush": false, 00:12:17.438 "reset": true, 00:12:17.438 "nvme_admin": false, 00:12:17.438 "nvme_io": false, 00:12:17.438 "nvme_io_md": false, 00:12:17.438 "write_zeroes": true, 00:12:17.438 "zcopy": false, 00:12:17.438 "get_zone_info": false, 00:12:17.438 "zone_management": false, 00:12:17.438 "zone_append": false, 00:12:17.438 "compare": false, 00:12:17.438 "compare_and_write": false, 00:12:17.438 
"abort": false, 00:12:17.438 "seek_hole": false, 00:12:17.438 "seek_data": false, 00:12:17.438 "copy": false, 00:12:17.438 "nvme_iov_md": false 00:12:17.438 }, 00:12:17.438 "memory_domains": [ 00:12:17.438 { 00:12:17.438 "dma_device_id": "system", 00:12:17.438 "dma_device_type": 1 00:12:17.438 }, 00:12:17.438 { 00:12:17.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.438 "dma_device_type": 2 00:12:17.438 }, 00:12:17.438 { 00:12:17.438 "dma_device_id": "system", 00:12:17.438 "dma_device_type": 1 00:12:17.438 }, 00:12:17.438 { 00:12:17.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.438 "dma_device_type": 2 00:12:17.438 }, 00:12:17.438 { 00:12:17.438 "dma_device_id": "system", 00:12:17.438 "dma_device_type": 1 00:12:17.438 }, 00:12:17.438 { 00:12:17.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.438 "dma_device_type": 2 00:12:17.438 }, 00:12:17.438 { 00:12:17.438 "dma_device_id": "system", 00:12:17.438 "dma_device_type": 1 00:12:17.438 }, 00:12:17.438 { 00:12:17.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.438 "dma_device_type": 2 00:12:17.438 } 00:12:17.438 ], 00:12:17.438 "driver_specific": { 00:12:17.438 "raid": { 00:12:17.438 "uuid": "06a41cdc-41a8-40fb-af44-2f14f854499e", 00:12:17.438 "strip_size_kb": 0, 00:12:17.438 "state": "online", 00:12:17.438 "raid_level": "raid1", 00:12:17.438 "superblock": true, 00:12:17.438 "num_base_bdevs": 4, 00:12:17.439 "num_base_bdevs_discovered": 4, 00:12:17.439 "num_base_bdevs_operational": 4, 00:12:17.439 "base_bdevs_list": [ 00:12:17.439 { 00:12:17.439 "name": "NewBaseBdev", 00:12:17.439 "uuid": "9606297d-f214-44d4-9d1a-470e34369c38", 00:12:17.439 "is_configured": true, 00:12:17.439 "data_offset": 2048, 00:12:17.439 "data_size": 63488 00:12:17.439 }, 00:12:17.439 { 00:12:17.439 "name": "BaseBdev2", 00:12:17.439 "uuid": "8be4ff61-48d8-4f3c-964c-714f656fb701", 00:12:17.439 "is_configured": true, 00:12:17.439 "data_offset": 2048, 00:12:17.439 "data_size": 63488 00:12:17.439 }, 00:12:17.439 { 
00:12:17.439 "name": "BaseBdev3", 00:12:17.439 "uuid": "c4b6166f-7d12-477b-a37d-329ab4464431", 00:12:17.439 "is_configured": true, 00:12:17.439 "data_offset": 2048, 00:12:17.439 "data_size": 63488 00:12:17.439 }, 00:12:17.439 { 00:12:17.439 "name": "BaseBdev4", 00:12:17.439 "uuid": "d30f0c4a-b8fe-4e84-9ccc-57272240b54a", 00:12:17.439 "is_configured": true, 00:12:17.439 "data_offset": 2048, 00:12:17.439 "data_size": 63488 00:12:17.439 } 00:12:17.439 ] 00:12:17.439 } 00:12:17.439 } 00:12:17.439 }' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:17.439 BaseBdev2 00:12:17.439 BaseBdev3 00:12:17.439 BaseBdev4' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.439 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.699 [2024-11-15 09:31:05.956069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.699 [2024-11-15 09:31:05.956187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.699 [2024-11-15 09:31:05.956315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.699 [2024-11-15 09:31:05.956657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.699 [2024-11-15 09:31:05.956721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74246 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74246 ']' 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74246 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:17.699 09:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74246 00:12:17.699 09:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:17.699 09:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:17.699 09:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74246' 00:12:17.699 killing process with pid 74246 00:12:17.699 09:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74246 00:12:17.699 [2024-11-15 09:31:06.005786] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.699 09:31:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74246 00:12:17.959 [2024-11-15 09:31:06.418152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.338 09:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:19.338 00:12:19.338 real 0m12.018s 00:12:19.338 user 0m18.966s 00:12:19.338 sys 0m2.175s 00:12:19.338 09:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:12:19.338 09:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.338 ************************************ 00:12:19.338 END TEST raid_state_function_test_sb 00:12:19.338 ************************************ 00:12:19.338 09:31:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:19.338 09:31:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:19.338 09:31:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:19.338 09:31:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.338 ************************************ 00:12:19.338 START TEST raid_superblock_test 00:12:19.338 ************************************ 00:12:19.338 09:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:19.339 09:31:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74916 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74916 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74916 ']' 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:19.339 09:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.339 [2024-11-15 09:31:07.781768] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:12:19.339 [2024-11-15 09:31:07.781989] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74916 ] 00:12:19.598 [2024-11-15 09:31:07.939977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.858 [2024-11-15 09:31:08.068312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.858 [2024-11-15 09:31:08.274576] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.858 [2024-11-15 09:31:08.274716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:20.518 
09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 malloc1 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 [2024-11-15 09:31:08.787605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:20.518 [2024-11-15 09:31:08.787775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.518 [2024-11-15 09:31:08.787820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:20.518 [2024-11-15 09:31:08.787878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.518 [2024-11-15 09:31:08.790250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.518 [2024-11-15 09:31:08.790328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:20.518 pt1 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 malloc2 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 [2024-11-15 09:31:08.844574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:20.518 [2024-11-15 09:31:08.844722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.518 [2024-11-15 09:31:08.844778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:20.518 [2024-11-15 09:31:08.844837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.518 [2024-11-15 09:31:08.847273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.518 [2024-11-15 09:31:08.847358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:20.518 
pt2 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 malloc3 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.518 [2024-11-15 09:31:08.920931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:20.518 [2024-11-15 09:31:08.921078] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.518 [2024-11-15 09:31:08.921107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:20.518 [2024-11-15 09:31:08.921118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.518 [2024-11-15 09:31:08.923439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.518 [2024-11-15 09:31:08.923479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:20.518 pt3 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.518 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.519 malloc4 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.519 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.519 [2024-11-15 09:31:08.978847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:20.519 [2024-11-15 09:31:08.979021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.519 [2024-11-15 09:31:08.979065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:20.519 [2024-11-15 09:31:08.979102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.519 [2024-11-15 09:31:08.981609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.519 [2024-11-15 09:31:08.981696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:20.778 pt4 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.778 [2024-11-15 09:31:08.990854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:20.778 [2024-11-15 09:31:08.992956] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:20.778 [2024-11-15 09:31:08.993077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:20.778 [2024-11-15 09:31:08.993162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:20.778 [2024-11-15 09:31:08.993446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:20.778 [2024-11-15 09:31:08.993498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.778 [2024-11-15 09:31:08.993807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:20.778 [2024-11-15 09:31:08.994089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:20.778 [2024-11-15 09:31:08.994146] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:20.778 [2024-11-15 09:31:08.994362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.778 
09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.778 09:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.778 "name": "raid_bdev1", 00:12:20.778 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:20.778 "strip_size_kb": 0, 00:12:20.778 "state": "online", 00:12:20.778 "raid_level": "raid1", 00:12:20.778 "superblock": true, 00:12:20.778 "num_base_bdevs": 4, 00:12:20.778 "num_base_bdevs_discovered": 4, 00:12:20.778 "num_base_bdevs_operational": 4, 00:12:20.778 "base_bdevs_list": [ 00:12:20.778 { 00:12:20.778 "name": "pt1", 00:12:20.778 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.778 "is_configured": true, 00:12:20.778 "data_offset": 2048, 00:12:20.778 "data_size": 63488 00:12:20.778 }, 00:12:20.778 { 00:12:20.778 "name": "pt2", 00:12:20.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.778 "is_configured": true, 00:12:20.778 "data_offset": 2048, 00:12:20.778 "data_size": 63488 00:12:20.778 }, 00:12:20.778 { 00:12:20.778 "name": "pt3", 00:12:20.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.778 "is_configured": true, 00:12:20.778 "data_offset": 2048, 00:12:20.778 "data_size": 63488 
00:12:20.778 }, 00:12:20.778 { 00:12:20.778 "name": "pt4", 00:12:20.778 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:20.778 "is_configured": true, 00:12:20.778 "data_offset": 2048, 00:12:20.778 "data_size": 63488 00:12:20.778 } 00:12:20.778 ] 00:12:20.778 }' 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.778 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.037 [2024-11-15 09:31:09.430431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.037 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.037 "name": "raid_bdev1", 00:12:21.037 "aliases": [ 00:12:21.037 "43b4f325-4431-40fa-94c0-42bdc8ebcf44" 00:12:21.037 ], 
00:12:21.037 "product_name": "Raid Volume", 00:12:21.037 "block_size": 512, 00:12:21.037 "num_blocks": 63488, 00:12:21.037 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:21.037 "assigned_rate_limits": { 00:12:21.037 "rw_ios_per_sec": 0, 00:12:21.037 "rw_mbytes_per_sec": 0, 00:12:21.037 "r_mbytes_per_sec": 0, 00:12:21.037 "w_mbytes_per_sec": 0 00:12:21.037 }, 00:12:21.037 "claimed": false, 00:12:21.037 "zoned": false, 00:12:21.037 "supported_io_types": { 00:12:21.037 "read": true, 00:12:21.037 "write": true, 00:12:21.037 "unmap": false, 00:12:21.037 "flush": false, 00:12:21.037 "reset": true, 00:12:21.037 "nvme_admin": false, 00:12:21.037 "nvme_io": false, 00:12:21.037 "nvme_io_md": false, 00:12:21.037 "write_zeroes": true, 00:12:21.037 "zcopy": false, 00:12:21.037 "get_zone_info": false, 00:12:21.037 "zone_management": false, 00:12:21.037 "zone_append": false, 00:12:21.037 "compare": false, 00:12:21.037 "compare_and_write": false, 00:12:21.037 "abort": false, 00:12:21.037 "seek_hole": false, 00:12:21.037 "seek_data": false, 00:12:21.037 "copy": false, 00:12:21.037 "nvme_iov_md": false 00:12:21.037 }, 00:12:21.037 "memory_domains": [ 00:12:21.037 { 00:12:21.037 "dma_device_id": "system", 00:12:21.037 "dma_device_type": 1 00:12:21.037 }, 00:12:21.037 { 00:12:21.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.037 "dma_device_type": 2 00:12:21.037 }, 00:12:21.037 { 00:12:21.037 "dma_device_id": "system", 00:12:21.037 "dma_device_type": 1 00:12:21.037 }, 00:12:21.037 { 00:12:21.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.037 "dma_device_type": 2 00:12:21.037 }, 00:12:21.037 { 00:12:21.037 "dma_device_id": "system", 00:12:21.037 "dma_device_type": 1 00:12:21.037 }, 00:12:21.037 { 00:12:21.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.037 "dma_device_type": 2 00:12:21.037 }, 00:12:21.037 { 00:12:21.037 "dma_device_id": "system", 00:12:21.037 "dma_device_type": 1 00:12:21.037 }, 00:12:21.037 { 00:12:21.037 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:21.037 "dma_device_type": 2 00:12:21.037 } 00:12:21.037 ], 00:12:21.037 "driver_specific": { 00:12:21.038 "raid": { 00:12:21.038 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:21.038 "strip_size_kb": 0, 00:12:21.038 "state": "online", 00:12:21.038 "raid_level": "raid1", 00:12:21.038 "superblock": true, 00:12:21.038 "num_base_bdevs": 4, 00:12:21.038 "num_base_bdevs_discovered": 4, 00:12:21.038 "num_base_bdevs_operational": 4, 00:12:21.038 "base_bdevs_list": [ 00:12:21.038 { 00:12:21.038 "name": "pt1", 00:12:21.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.038 "is_configured": true, 00:12:21.038 "data_offset": 2048, 00:12:21.038 "data_size": 63488 00:12:21.038 }, 00:12:21.038 { 00:12:21.038 "name": "pt2", 00:12:21.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.038 "is_configured": true, 00:12:21.038 "data_offset": 2048, 00:12:21.038 "data_size": 63488 00:12:21.038 }, 00:12:21.038 { 00:12:21.038 "name": "pt3", 00:12:21.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.038 "is_configured": true, 00:12:21.038 "data_offset": 2048, 00:12:21.038 "data_size": 63488 00:12:21.038 }, 00:12:21.038 { 00:12:21.038 "name": "pt4", 00:12:21.038 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:21.038 "is_configured": true, 00:12:21.038 "data_offset": 2048, 00:12:21.038 "data_size": 63488 00:12:21.038 } 00:12:21.038 ] 00:12:21.038 } 00:12:21.038 } 00:12:21.038 }' 00:12:21.038 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:21.297 pt2 00:12:21.297 pt3 00:12:21.297 pt4' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.297 09:31:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.297 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:21.298 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:21.298 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.298 [2024-11-15 09:31:09.745882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=43b4f325-4431-40fa-94c0-42bdc8ebcf44 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 43b4f325-4431-40fa-94c0-42bdc8ebcf44 ']' 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 [2024-11-15 09:31:09.797490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.558 [2024-11-15 09:31:09.797598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.558 [2024-11-15 09:31:09.797699] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.558 [2024-11-15 09:31:09.797787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.558 [2024-11-15 09:31:09.797802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.558 09:31:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.558 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.558 [2024-11-15 09:31:09.961201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:21.558 [2024-11-15 09:31:09.963096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:21.558 [2024-11-15 09:31:09.963185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:21.558 [2024-11-15 09:31:09.963250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:21.558 [2024-11-15 09:31:09.963326] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:21.558 [2024-11-15 09:31:09.963413] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:21.558 [2024-11-15 09:31:09.963477] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:21.558 [2024-11-15 09:31:09.963567] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:21.558 [2024-11-15 09:31:09.963620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.558 [2024-11-15 09:31:09.963655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring
00:12:21.558 request:
00:12:21.558 {
00:12:21.558 "name": "raid_bdev1",
00:12:21.558 "raid_level": "raid1",
00:12:21.558 "base_bdevs": [
00:12:21.558 "malloc1",
00:12:21.558 "malloc2",
00:12:21.558 "malloc3",
00:12:21.558 "malloc4"
00:12:21.558 ],
00:12:21.559 "superblock": false,
00:12:21.559 "method": "bdev_raid_create",
00:12:21.559 "req_id": 1
00:12:21.559 }
00:12:21.559 Got JSON-RPC error response
00:12:21.559 response:
00:12:21.559 {
00:12:21.559 "code": -17,
00:12:21.559 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:21.559 }
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:21.559 09:31:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.559 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:21.559 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:21.559 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:21.559
09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.559 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.559 [2024-11-15 09:31:10.021077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:21.559 [2024-11-15 09:31:10.021197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.559 [2024-11-15 09:31:10.021236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:21.559 [2024-11-15 09:31:10.021276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.823 [2024-11-15 09:31:10.023526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.823 [2024-11-15 09:31:10.023607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:21.823 [2024-11-15 09:31:10.023708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:21.823 [2024-11-15 09:31:10.023785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:21.823 pt1 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.823 09:31:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.823 "name": "raid_bdev1", 00:12:21.823 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:21.823 "strip_size_kb": 0, 00:12:21.823 "state": "configuring", 00:12:21.823 "raid_level": "raid1", 00:12:21.823 "superblock": true, 00:12:21.823 "num_base_bdevs": 4, 00:12:21.823 "num_base_bdevs_discovered": 1, 00:12:21.823 "num_base_bdevs_operational": 4, 00:12:21.823 "base_bdevs_list": [ 00:12:21.823 { 00:12:21.823 "name": "pt1", 00:12:21.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.823 "is_configured": true, 00:12:21.823 "data_offset": 2048, 00:12:21.823 "data_size": 63488 00:12:21.823 }, 00:12:21.823 { 00:12:21.823 "name": null, 00:12:21.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.823 "is_configured": false, 00:12:21.823 "data_offset": 2048, 00:12:21.823 "data_size": 63488 00:12:21.823 }, 00:12:21.823 { 00:12:21.823 "name": null, 00:12:21.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.823 
"is_configured": false, 00:12:21.823 "data_offset": 2048, 00:12:21.823 "data_size": 63488 00:12:21.823 }, 00:12:21.823 { 00:12:21.823 "name": null, 00:12:21.823 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:21.823 "is_configured": false, 00:12:21.823 "data_offset": 2048, 00:12:21.823 "data_size": 63488 00:12:21.823 } 00:12:21.823 ] 00:12:21.823 }' 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.823 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.081 [2024-11-15 09:31:10.468386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.081 [2024-11-15 09:31:10.468475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.081 [2024-11-15 09:31:10.468500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:22.081 [2024-11-15 09:31:10.468514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.081 [2024-11-15 09:31:10.469040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.081 [2024-11-15 09:31:10.469065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.081 [2024-11-15 09:31:10.469158] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.081 [2024-11-15 09:31:10.469194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:22.081 pt2 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.081 [2024-11-15 09:31:10.476376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.081 "name": "raid_bdev1", 00:12:22.081 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:22.081 "strip_size_kb": 0, 00:12:22.081 "state": "configuring", 00:12:22.081 "raid_level": "raid1", 00:12:22.081 "superblock": true, 00:12:22.081 "num_base_bdevs": 4, 00:12:22.081 "num_base_bdevs_discovered": 1, 00:12:22.081 "num_base_bdevs_operational": 4, 00:12:22.081 "base_bdevs_list": [ 00:12:22.081 { 00:12:22.081 "name": "pt1", 00:12:22.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.081 "is_configured": true, 00:12:22.081 "data_offset": 2048, 00:12:22.081 "data_size": 63488 00:12:22.081 }, 00:12:22.081 { 00:12:22.081 "name": null, 00:12:22.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.081 "is_configured": false, 00:12:22.081 "data_offset": 0, 00:12:22.081 "data_size": 63488 00:12:22.081 }, 00:12:22.081 { 00:12:22.081 "name": null, 00:12:22.081 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.081 "is_configured": false, 00:12:22.081 "data_offset": 2048, 00:12:22.081 "data_size": 63488 00:12:22.081 }, 00:12:22.081 { 00:12:22.081 "name": null, 00:12:22.081 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.081 "is_configured": false, 00:12:22.081 "data_offset": 2048, 00:12:22.081 "data_size": 63488 00:12:22.081 } 00:12:22.081 ] 00:12:22.081 }' 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.081 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.649 [2024-11-15 09:31:10.919651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.649 [2024-11-15 09:31:10.919808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.649 [2024-11-15 09:31:10.919841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:22.649 [2024-11-15 09:31:10.919865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.649 [2024-11-15 09:31:10.920395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.649 [2024-11-15 09:31:10.920415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.649 [2024-11-15 09:31:10.920510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.649 [2024-11-15 09:31:10.920533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.649 pt2 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:22.649 09:31:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.649 [2024-11-15 09:31:10.931584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:22.649 [2024-11-15 09:31:10.931639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.649 [2024-11-15 09:31:10.931658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:22.649 [2024-11-15 09:31:10.931667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.649 [2024-11-15 09:31:10.932091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.649 [2024-11-15 09:31:10.932111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:22.649 [2024-11-15 09:31:10.932185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:22.649 [2024-11-15 09:31:10.932205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:22.649 pt3 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.649 [2024-11-15 09:31:10.943566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:22.649 [2024-11-15 
09:31:10.943614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.649 [2024-11-15 09:31:10.943632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:22.649 [2024-11-15 09:31:10.943642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.649 [2024-11-15 09:31:10.944080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.649 [2024-11-15 09:31:10.944098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:22.649 [2024-11-15 09:31:10.944165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:22.649 [2024-11-15 09:31:10.944183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:22.649 [2024-11-15 09:31:10.944336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.649 [2024-11-15 09:31:10.944352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.649 [2024-11-15 09:31:10.944620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:22.649 [2024-11-15 09:31:10.944788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.649 [2024-11-15 09:31:10.944814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:22.649 [2024-11-15 09:31:10.944980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.649 pt4 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.649 09:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.649 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.649 "name": "raid_bdev1", 00:12:22.649 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:22.649 "strip_size_kb": 0, 00:12:22.649 "state": "online", 00:12:22.649 "raid_level": "raid1", 00:12:22.649 "superblock": true, 00:12:22.649 "num_base_bdevs": 4, 00:12:22.649 
"num_base_bdevs_discovered": 4, 00:12:22.649 "num_base_bdevs_operational": 4, 00:12:22.649 "base_bdevs_list": [ 00:12:22.649 { 00:12:22.649 "name": "pt1", 00:12:22.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.649 "is_configured": true, 00:12:22.649 "data_offset": 2048, 00:12:22.649 "data_size": 63488 00:12:22.649 }, 00:12:22.649 { 00:12:22.649 "name": "pt2", 00:12:22.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.649 "is_configured": true, 00:12:22.649 "data_offset": 2048, 00:12:22.649 "data_size": 63488 00:12:22.649 }, 00:12:22.649 { 00:12:22.649 "name": "pt3", 00:12:22.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.649 "is_configured": true, 00:12:22.649 "data_offset": 2048, 00:12:22.649 "data_size": 63488 00:12:22.649 }, 00:12:22.649 { 00:12:22.649 "name": "pt4", 00:12:22.649 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.649 "is_configured": true, 00:12:22.649 "data_offset": 2048, 00:12:22.649 "data_size": 63488 00:12:22.649 } 00:12:22.649 ] 00:12:22.649 }' 00:12:22.649 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.649 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.223 [2024-11-15 09:31:11.443208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.223 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.223 "name": "raid_bdev1", 00:12:23.223 "aliases": [ 00:12:23.223 "43b4f325-4431-40fa-94c0-42bdc8ebcf44" 00:12:23.223 ], 00:12:23.223 "product_name": "Raid Volume", 00:12:23.223 "block_size": 512, 00:12:23.223 "num_blocks": 63488, 00:12:23.223 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:23.223 "assigned_rate_limits": { 00:12:23.223 "rw_ios_per_sec": 0, 00:12:23.223 "rw_mbytes_per_sec": 0, 00:12:23.223 "r_mbytes_per_sec": 0, 00:12:23.223 "w_mbytes_per_sec": 0 00:12:23.223 }, 00:12:23.223 "claimed": false, 00:12:23.223 "zoned": false, 00:12:23.223 "supported_io_types": { 00:12:23.223 "read": true, 00:12:23.223 "write": true, 00:12:23.223 "unmap": false, 00:12:23.223 "flush": false, 00:12:23.223 "reset": true, 00:12:23.223 "nvme_admin": false, 00:12:23.223 "nvme_io": false, 00:12:23.223 "nvme_io_md": false, 00:12:23.223 "write_zeroes": true, 00:12:23.223 "zcopy": false, 00:12:23.224 "get_zone_info": false, 00:12:23.224 "zone_management": false, 00:12:23.224 "zone_append": false, 00:12:23.224 "compare": false, 00:12:23.224 "compare_and_write": false, 00:12:23.224 "abort": false, 00:12:23.224 "seek_hole": false, 00:12:23.224 "seek_data": false, 00:12:23.224 "copy": false, 00:12:23.224 "nvme_iov_md": false 00:12:23.224 }, 00:12:23.224 "memory_domains": [ 00:12:23.224 { 00:12:23.224 "dma_device_id": "system", 00:12:23.224 
"dma_device_type": 1 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.224 "dma_device_type": 2 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "dma_device_id": "system", 00:12:23.224 "dma_device_type": 1 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.224 "dma_device_type": 2 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "dma_device_id": "system", 00:12:23.224 "dma_device_type": 1 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.224 "dma_device_type": 2 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "dma_device_id": "system", 00:12:23.224 "dma_device_type": 1 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.224 "dma_device_type": 2 00:12:23.224 } 00:12:23.224 ], 00:12:23.224 "driver_specific": { 00:12:23.224 "raid": { 00:12:23.224 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:23.224 "strip_size_kb": 0, 00:12:23.224 "state": "online", 00:12:23.224 "raid_level": "raid1", 00:12:23.224 "superblock": true, 00:12:23.224 "num_base_bdevs": 4, 00:12:23.224 "num_base_bdevs_discovered": 4, 00:12:23.224 "num_base_bdevs_operational": 4, 00:12:23.224 "base_bdevs_list": [ 00:12:23.224 { 00:12:23.224 "name": "pt1", 00:12:23.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.224 "is_configured": true, 00:12:23.224 "data_offset": 2048, 00:12:23.224 "data_size": 63488 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "name": "pt2", 00:12:23.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.224 "is_configured": true, 00:12:23.224 "data_offset": 2048, 00:12:23.224 "data_size": 63488 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "name": "pt3", 00:12:23.224 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.224 "is_configured": true, 00:12:23.224 "data_offset": 2048, 00:12:23.224 "data_size": 63488 00:12:23.224 }, 00:12:23.224 { 00:12:23.224 "name": "pt4", 00:12:23.224 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:23.224 "is_configured": true, 00:12:23.224 "data_offset": 2048, 00:12:23.224 "data_size": 63488 00:12:23.224 } 00:12:23.224 ] 00:12:23.224 } 00:12:23.224 } 00:12:23.224 }' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:23.224 pt2 00:12:23.224 pt3 00:12:23.224 pt4' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.224 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.484 [2024-11-15 09:31:11.766529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 43b4f325-4431-40fa-94c0-42bdc8ebcf44 '!=' 43b4f325-4431-40fa-94c0-42bdc8ebcf44 ']' 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:23.484 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.485 [2024-11-15 09:31:11.814193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:23.485 09:31:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.485 "name": "raid_bdev1", 00:12:23.485 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:23.485 "strip_size_kb": 0, 00:12:23.485 "state": "online", 
00:12:23.485 "raid_level": "raid1", 00:12:23.485 "superblock": true, 00:12:23.485 "num_base_bdevs": 4, 00:12:23.485 "num_base_bdevs_discovered": 3, 00:12:23.485 "num_base_bdevs_operational": 3, 00:12:23.485 "base_bdevs_list": [ 00:12:23.485 { 00:12:23.485 "name": null, 00:12:23.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.485 "is_configured": false, 00:12:23.485 "data_offset": 0, 00:12:23.485 "data_size": 63488 00:12:23.485 }, 00:12:23.485 { 00:12:23.485 "name": "pt2", 00:12:23.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.485 "is_configured": true, 00:12:23.485 "data_offset": 2048, 00:12:23.485 "data_size": 63488 00:12:23.485 }, 00:12:23.485 { 00:12:23.485 "name": "pt3", 00:12:23.485 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.485 "is_configured": true, 00:12:23.485 "data_offset": 2048, 00:12:23.485 "data_size": 63488 00:12:23.485 }, 00:12:23.485 { 00:12:23.485 "name": "pt4", 00:12:23.485 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.485 "is_configured": true, 00:12:23.485 "data_offset": 2048, 00:12:23.485 "data_size": 63488 00:12:23.485 } 00:12:23.485 ] 00:12:23.485 }' 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.485 09:31:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 [2024-11-15 09:31:12.305346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.053 [2024-11-15 09:31:12.305397] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.053 [2024-11-15 09:31:12.305490] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:24.053 [2024-11-15 09:31:12.305596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.053 [2024-11-15 09:31:12.305606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.053 
09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 [2024-11-15 09:31:12.405165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.053 [2024-11-15 09:31:12.405335] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.053 [2024-11-15 09:31:12.405375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:24.053 [2024-11-15 09:31:12.405405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.053 [2024-11-15 09:31:12.407869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.053 [2024-11-15 09:31:12.407972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.053 [2024-11-15 09:31:12.408103] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:24.053 [2024-11-15 09:31:12.408204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.053 pt2 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.053 "name": "raid_bdev1", 00:12:24.053 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:24.053 "strip_size_kb": 0, 00:12:24.053 "state": "configuring", 00:12:24.053 "raid_level": "raid1", 00:12:24.053 "superblock": true, 00:12:24.053 "num_base_bdevs": 4, 00:12:24.053 "num_base_bdevs_discovered": 1, 00:12:24.053 "num_base_bdevs_operational": 3, 00:12:24.053 "base_bdevs_list": [ 00:12:24.053 { 00:12:24.053 "name": null, 00:12:24.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.053 "is_configured": false, 00:12:24.053 "data_offset": 2048, 00:12:24.053 "data_size": 63488 00:12:24.053 }, 00:12:24.053 { 00:12:24.053 "name": "pt2", 00:12:24.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.053 "is_configured": true, 00:12:24.053 "data_offset": 2048, 00:12:24.053 "data_size": 63488 00:12:24.053 }, 00:12:24.053 { 00:12:24.053 "name": null, 00:12:24.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.053 "is_configured": false, 00:12:24.053 "data_offset": 2048, 00:12:24.053 "data_size": 63488 00:12:24.053 }, 00:12:24.053 { 00:12:24.053 "name": null, 00:12:24.053 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.053 "is_configured": false, 00:12:24.053 "data_offset": 2048, 00:12:24.053 "data_size": 63488 00:12:24.053 } 00:12:24.053 ] 00:12:24.053 }' 
00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.053 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.622 [2024-11-15 09:31:12.876437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:24.622 [2024-11-15 09:31:12.876534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.622 [2024-11-15 09:31:12.876560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:24.622 [2024-11-15 09:31:12.876571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.622 [2024-11-15 09:31:12.877126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.622 [2024-11-15 09:31:12.877156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:24.622 [2024-11-15 09:31:12.877255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:24.622 [2024-11-15 09:31:12.877281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:24.622 pt3 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.622 "name": "raid_bdev1", 00:12:24.622 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:24.622 "strip_size_kb": 0, 00:12:24.622 "state": "configuring", 00:12:24.622 "raid_level": "raid1", 00:12:24.622 "superblock": true, 00:12:24.622 "num_base_bdevs": 4, 00:12:24.622 "num_base_bdevs_discovered": 2, 00:12:24.622 "num_base_bdevs_operational": 3, 00:12:24.622 
"base_bdevs_list": [ 00:12:24.622 { 00:12:24.622 "name": null, 00:12:24.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.622 "is_configured": false, 00:12:24.622 "data_offset": 2048, 00:12:24.622 "data_size": 63488 00:12:24.622 }, 00:12:24.622 { 00:12:24.622 "name": "pt2", 00:12:24.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.622 "is_configured": true, 00:12:24.622 "data_offset": 2048, 00:12:24.622 "data_size": 63488 00:12:24.622 }, 00:12:24.622 { 00:12:24.622 "name": "pt3", 00:12:24.622 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.622 "is_configured": true, 00:12:24.622 "data_offset": 2048, 00:12:24.622 "data_size": 63488 00:12:24.622 }, 00:12:24.622 { 00:12:24.622 "name": null, 00:12:24.622 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.622 "is_configured": false, 00:12:24.622 "data_offset": 2048, 00:12:24.622 "data_size": 63488 00:12:24.622 } 00:12:24.622 ] 00:12:24.622 }' 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.622 09:31:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.880 [2024-11-15 09:31:13.335732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:24.880 [2024-11-15 09:31:13.335970] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.880 [2024-11-15 09:31:13.336025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:24.880 [2024-11-15 09:31:13.336064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.880 [2024-11-15 09:31:13.336653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.880 [2024-11-15 09:31:13.336724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:24.880 [2024-11-15 09:31:13.336881] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:24.880 [2024-11-15 09:31:13.336953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:24.880 [2024-11-15 09:31:13.337156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:24.880 [2024-11-15 09:31:13.337198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.880 [2024-11-15 09:31:13.337505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:24.880 [2024-11-15 09:31:13.337713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:24.880 [2024-11-15 09:31:13.337760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:24.880 [2024-11-15 09:31:13.337987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.880 pt4 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.880 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.139 "name": "raid_bdev1", 00:12:25.139 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:25.139 "strip_size_kb": 0, 00:12:25.139 "state": "online", 00:12:25.139 "raid_level": "raid1", 00:12:25.139 "superblock": true, 00:12:25.139 "num_base_bdevs": 4, 00:12:25.139 "num_base_bdevs_discovered": 3, 00:12:25.139 "num_base_bdevs_operational": 3, 00:12:25.139 "base_bdevs_list": [ 00:12:25.139 { 00:12:25.139 "name": null, 00:12:25.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.139 "is_configured": false, 00:12:25.139 
"data_offset": 2048, 00:12:25.139 "data_size": 63488 00:12:25.139 }, 00:12:25.139 { 00:12:25.139 "name": "pt2", 00:12:25.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.139 "is_configured": true, 00:12:25.139 "data_offset": 2048, 00:12:25.139 "data_size": 63488 00:12:25.139 }, 00:12:25.139 { 00:12:25.139 "name": "pt3", 00:12:25.139 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.139 "is_configured": true, 00:12:25.139 "data_offset": 2048, 00:12:25.139 "data_size": 63488 00:12:25.139 }, 00:12:25.139 { 00:12:25.139 "name": "pt4", 00:12:25.139 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.139 "is_configured": true, 00:12:25.139 "data_offset": 2048, 00:12:25.139 "data_size": 63488 00:12:25.139 } 00:12:25.139 ] 00:12:25.139 }' 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.139 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.397 [2024-11-15 09:31:13.830814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.397 [2024-11-15 09:31:13.831005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.397 [2024-11-15 09:31:13.831119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.397 [2024-11-15 09:31:13.831209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.397 [2024-11-15 09:31:13.831224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:25.397 09:31:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:25.397 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.655 [2024-11-15 09:31:13.902695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.655 [2024-11-15 09:31:13.902800] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:25.655 [2024-11-15 09:31:13.902825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:25.655 [2024-11-15 09:31:13.902838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.655 [2024-11-15 09:31:13.905537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.655 [2024-11-15 09:31:13.905598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.655 [2024-11-15 09:31:13.905708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:25.655 [2024-11-15 09:31:13.905767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.655 [2024-11-15 09:31:13.905957] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:25.655 [2024-11-15 09:31:13.905975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.655 [2024-11-15 09:31:13.905993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:25.655 [2024-11-15 09:31:13.906077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.655 [2024-11-15 09:31:13.906215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:25.655 pt1 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.655 "name": "raid_bdev1", 00:12:25.655 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:25.655 "strip_size_kb": 0, 00:12:25.655 "state": "configuring", 00:12:25.655 "raid_level": "raid1", 00:12:25.655 "superblock": true, 00:12:25.655 "num_base_bdevs": 4, 00:12:25.655 "num_base_bdevs_discovered": 2, 00:12:25.655 "num_base_bdevs_operational": 3, 00:12:25.655 "base_bdevs_list": [ 00:12:25.655 { 00:12:25.655 "name": null, 00:12:25.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.655 "is_configured": false, 00:12:25.655 "data_offset": 2048, 00:12:25.655 
"data_size": 63488 00:12:25.655 }, 00:12:25.655 { 00:12:25.655 "name": "pt2", 00:12:25.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.655 "is_configured": true, 00:12:25.655 "data_offset": 2048, 00:12:25.655 "data_size": 63488 00:12:25.655 }, 00:12:25.655 { 00:12:25.655 "name": "pt3", 00:12:25.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.655 "is_configured": true, 00:12:25.655 "data_offset": 2048, 00:12:25.655 "data_size": 63488 00:12:25.655 }, 00:12:25.655 { 00:12:25.655 "name": null, 00:12:25.655 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.655 "is_configured": false, 00:12:25.655 "data_offset": 2048, 00:12:25.655 "data_size": 63488 00:12:25.655 } 00:12:25.655 ] 00:12:25.655 }' 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.655 09:31:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.222 [2024-11-15 
09:31:14.433834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:26.222 [2024-11-15 09:31:14.434047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.222 [2024-11-15 09:31:14.434097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:26.222 [2024-11-15 09:31:14.434135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.222 [2024-11-15 09:31:14.434692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.222 [2024-11-15 09:31:14.434761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:26.222 [2024-11-15 09:31:14.434910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:26.222 [2024-11-15 09:31:14.434986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:26.222 [2024-11-15 09:31:14.435202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:26.222 [2024-11-15 09:31:14.435244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.222 [2024-11-15 09:31:14.435550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:26.222 [2024-11-15 09:31:14.435752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:26.222 [2024-11-15 09:31:14.435797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:26.222 [2024-11-15 09:31:14.436045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.222 pt4 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.222 09:31:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.222 "name": "raid_bdev1", 00:12:26.222 "uuid": "43b4f325-4431-40fa-94c0-42bdc8ebcf44", 00:12:26.222 "strip_size_kb": 0, 00:12:26.222 "state": "online", 00:12:26.222 "raid_level": "raid1", 00:12:26.222 "superblock": true, 00:12:26.222 "num_base_bdevs": 4, 00:12:26.222 "num_base_bdevs_discovered": 3, 00:12:26.222 "num_base_bdevs_operational": 3, 00:12:26.222 "base_bdevs_list": [ 00:12:26.222 { 
00:12:26.222 "name": null, 00:12:26.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.222 "is_configured": false, 00:12:26.222 "data_offset": 2048, 00:12:26.222 "data_size": 63488 00:12:26.222 }, 00:12:26.222 { 00:12:26.222 "name": "pt2", 00:12:26.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.222 "is_configured": true, 00:12:26.222 "data_offset": 2048, 00:12:26.222 "data_size": 63488 00:12:26.222 }, 00:12:26.222 { 00:12:26.222 "name": "pt3", 00:12:26.222 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.222 "is_configured": true, 00:12:26.222 "data_offset": 2048, 00:12:26.222 "data_size": 63488 00:12:26.222 }, 00:12:26.222 { 00:12:26.222 "name": "pt4", 00:12:26.222 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:26.222 "is_configured": true, 00:12:26.222 "data_offset": 2048, 00:12:26.222 "data_size": 63488 00:12:26.222 } 00:12:26.222 ] 00:12:26.222 }' 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.222 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.482 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:26.482 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:26.482 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.482 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.482 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.741 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:26.741 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.741 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.741 
09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.741 09:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:26.741 [2024-11-15 09:31:14.965338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.741 09:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 43b4f325-4431-40fa-94c0-42bdc8ebcf44 '!=' 43b4f325-4431-40fa-94c0-42bdc8ebcf44 ']' 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74916 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74916 ']' 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74916 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74916 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:26.741 killing process with pid 74916 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74916' 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74916 00:12:26.741 [2024-11-15 09:31:15.052621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.741 09:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74916 00:12:26.741 [2024-11-15 09:31:15.052765] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.741 [2024-11-15 09:31:15.052873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.741 [2024-11-15 09:31:15.052889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:27.310 [2024-11-15 09:31:15.522076] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.691 09:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:28.691 00:12:28.691 real 0m9.110s 00:12:28.691 user 0m14.178s 00:12:28.691 sys 0m1.734s 00:12:28.691 09:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:28.691 ************************************ 00:12:28.691 END TEST raid_superblock_test 00:12:28.691 ************************************ 00:12:28.691 09:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.691 09:31:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:28.691 09:31:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:28.691 09:31:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:28.691 09:31:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.691 ************************************ 00:12:28.691 START TEST raid_read_error_test 00:12:28.691 ************************************ 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:28.691 09:31:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:28.691 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jgT5QDqxz0 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75409 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75409 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75409 ']' 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:28.692 09:31:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.692 [2024-11-15 09:31:16.991103] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:12:28.692 [2024-11-15 09:31:16.991350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:12:28.951 [2024-11-15 09:31:17.176130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.951 [2024-11-15 09:31:17.313068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.211 [2024-11-15 09:31:17.559636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.211 [2024-11-15 09:31:17.559787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.472 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:29.472 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:29.472 09:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.472 09:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.472 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.472 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 BaseBdev1_malloc 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 true 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 [2024-11-15 09:31:17.960620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:29.734 [2024-11-15 09:31:17.960705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.734 [2024-11-15 09:31:17.960731] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:29.734 [2024-11-15 09:31:17.960745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.734 [2024-11-15 09:31:17.963395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.734 [2024-11-15 09:31:17.963446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.734 BaseBdev1 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.734 09:31:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.734 BaseBdev2_malloc 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 true 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 [2024-11-15 09:31:18.034318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:29.735 [2024-11-15 09:31:18.034414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.735 [2024-11-15 09:31:18.034439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:29.735 [2024-11-15 09:31:18.034455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.735 [2024-11-15 09:31:18.037103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.735 [2024-11-15 09:31:18.037155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.735 BaseBdev2 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 BaseBdev3_malloc 00:12:29.735 09:31:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 true 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 [2024-11-15 09:31:18.124292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:29.735 [2024-11-15 09:31:18.124486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.735 [2024-11-15 09:31:18.124555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:29.735 [2024-11-15 09:31:18.124608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.735 [2024-11-15 09:31:18.127277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.735 [2024-11-15 09:31:18.127371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:29.735 BaseBdev3 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 BaseBdev4_malloc 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 true 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.735 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 [2024-11-15 09:31:18.198425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:29.735 [2024-11-15 09:31:18.198599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.735 [2024-11-15 09:31:18.198646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:29.735 [2024-11-15 09:31:18.198704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.994 [2024-11-15 09:31:18.201411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.994 [2024-11-15 09:31:18.201522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:29.994 BaseBdev4 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.994 [2024-11-15 09:31:18.210534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.994 [2024-11-15 09:31:18.212725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.994 [2024-11-15 09:31:18.212819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.994 [2024-11-15 09:31:18.212914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:29.994 [2024-11-15 09:31:18.213194] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:29.994 [2024-11-15 09:31:18.213211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.994 [2024-11-15 09:31:18.213516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:29.994 [2024-11-15 09:31:18.213703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:29.994 [2024-11-15 09:31:18.213713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:29.994 [2024-11-15 09:31:18.213913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:29.994 09:31:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.994 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.995 "name": "raid_bdev1", 00:12:29.995 "uuid": "39a02db8-7ad6-4b80-824d-708ec51718e9", 00:12:29.995 "strip_size_kb": 0, 00:12:29.995 "state": "online", 00:12:29.995 "raid_level": "raid1", 00:12:29.995 "superblock": true, 00:12:29.995 "num_base_bdevs": 4, 00:12:29.995 "num_base_bdevs_discovered": 4, 00:12:29.995 "num_base_bdevs_operational": 4, 00:12:29.995 "base_bdevs_list": [ 00:12:29.995 { 
00:12:29.995 "name": "BaseBdev1", 00:12:29.995 "uuid": "a7849e4d-422f-5c5d-af97-84042346e8c5", 00:12:29.995 "is_configured": true, 00:12:29.995 "data_offset": 2048, 00:12:29.995 "data_size": 63488 00:12:29.995 }, 00:12:29.995 { 00:12:29.995 "name": "BaseBdev2", 00:12:29.995 "uuid": "fff48ea8-bbd1-546a-bd0f-c686767c5c4c", 00:12:29.995 "is_configured": true, 00:12:29.995 "data_offset": 2048, 00:12:29.995 "data_size": 63488 00:12:29.995 }, 00:12:29.995 { 00:12:29.995 "name": "BaseBdev3", 00:12:29.995 "uuid": "d597d8a4-5173-5ab7-ba01-db470e6dd6b2", 00:12:29.995 "is_configured": true, 00:12:29.995 "data_offset": 2048, 00:12:29.995 "data_size": 63488 00:12:29.995 }, 00:12:29.995 { 00:12:29.995 "name": "BaseBdev4", 00:12:29.995 "uuid": "0d69e34c-680c-573c-b3bf-e76fb9d664db", 00:12:29.995 "is_configured": true, 00:12:29.995 "data_offset": 2048, 00:12:29.995 "data_size": 63488 00:12:29.995 } 00:12:29.995 ] 00:12:29.995 }' 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.995 09:31:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.254 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:30.254 09:31:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:30.512 [2024-11-15 09:31:18.790960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.451 09:31:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.451 09:31:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.451 "name": "raid_bdev1", 00:12:31.451 "uuid": "39a02db8-7ad6-4b80-824d-708ec51718e9", 00:12:31.451 "strip_size_kb": 0, 00:12:31.451 "state": "online", 00:12:31.451 "raid_level": "raid1", 00:12:31.451 "superblock": true, 00:12:31.451 "num_base_bdevs": 4, 00:12:31.451 "num_base_bdevs_discovered": 4, 00:12:31.451 "num_base_bdevs_operational": 4, 00:12:31.451 "base_bdevs_list": [ 00:12:31.451 { 00:12:31.451 "name": "BaseBdev1", 00:12:31.451 "uuid": "a7849e4d-422f-5c5d-af97-84042346e8c5", 00:12:31.451 "is_configured": true, 00:12:31.451 "data_offset": 2048, 00:12:31.451 "data_size": 63488 00:12:31.451 }, 00:12:31.451 { 00:12:31.451 "name": "BaseBdev2", 00:12:31.451 "uuid": "fff48ea8-bbd1-546a-bd0f-c686767c5c4c", 00:12:31.451 "is_configured": true, 00:12:31.451 "data_offset": 2048, 00:12:31.451 "data_size": 63488 00:12:31.451 }, 00:12:31.451 { 00:12:31.451 "name": "BaseBdev3", 00:12:31.451 "uuid": "d597d8a4-5173-5ab7-ba01-db470e6dd6b2", 00:12:31.451 "is_configured": true, 00:12:31.451 "data_offset": 2048, 00:12:31.451 "data_size": 63488 00:12:31.451 }, 00:12:31.451 { 00:12:31.451 "name": "BaseBdev4", 00:12:31.451 "uuid": "0d69e34c-680c-573c-b3bf-e76fb9d664db", 00:12:31.451 "is_configured": true, 00:12:31.451 "data_offset": 2048, 00:12:31.451 "data_size": 63488 00:12:31.451 } 00:12:31.451 ] 00:12:31.451 }' 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.451 09:31:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.022 [2024-11-15 09:31:20.221794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.022 [2024-11-15 09:31:20.221971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.022 [2024-11-15 09:31:20.225376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.022 [2024-11-15 09:31:20.225512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.022 [2024-11-15 09:31:20.225684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.022 [2024-11-15 09:31:20.225752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75409 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75409 ']' 00:12:32.022 { 00:12:32.022 "results": [ 00:12:32.022 { 00:12:32.022 "job": "raid_bdev1", 00:12:32.022 "core_mask": "0x1", 00:12:32.022 "workload": "randrw", 00:12:32.022 "percentage": 50, 00:12:32.022 "status": "finished", 00:12:32.022 "queue_depth": 1, 00:12:32.022 "io_size": 131072, 00:12:32.022 "runtime": 1.431828, 00:12:32.022 "iops": 9272.76181217297, 00:12:32.022 "mibps": 1159.0952265216213, 00:12:32.022 "io_failed": 0, 00:12:32.022 "io_timeout": 0, 00:12:32.022 "avg_latency_us": 104.6604569809629, 00:12:32.022 "min_latency_us": 24.929257641921396, 00:12:32.022 "max_latency_us": 1903.1196506550218 00:12:32.022 } 00:12:32.022 ], 00:12:32.022 "core_count": 1 00:12:32.022 } 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75409 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75409 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75409' 00:12:32.022 killing process with pid 75409 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75409 00:12:32.022 09:31:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75409 00:12:32.022 [2024-11-15 09:31:20.270681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.283 [2024-11-15 09:31:20.657554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jgT5QDqxz0 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:33.663 00:12:33.663 real 0m5.195s 00:12:33.663 user 0m6.117s 00:12:33.663 sys 0m0.663s 
00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:33.663 09:31:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.663 ************************************ 00:12:33.663 END TEST raid_read_error_test 00:12:33.663 ************************************ 00:12:33.663 09:31:22 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:33.663 09:31:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:33.663 09:31:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:33.663 09:31:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.924 ************************************ 00:12:33.924 START TEST raid_write_error_test 00:12:33.924 ************************************ 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.zwmbfpIyDL 00:12:33.924 09:31:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75560 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75560 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75560 ']' 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:33.924 09:31:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.924 [2024-11-15 09:31:22.260779] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:12:33.924 [2024-11-15 09:31:22.260987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75560 ] 00:12:34.184 [2024-11-15 09:31:22.448876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.184 [2024-11-15 09:31:22.586911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.444 [2024-11-15 09:31:22.828171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.444 [2024-11-15 09:31:22.828226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 BaseBdev1_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 true 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 [2024-11-15 09:31:23.248098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:35.013 [2024-11-15 09:31:23.248181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.013 [2024-11-15 09:31:23.248208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:35.013 [2024-11-15 09:31:23.248221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.013 [2024-11-15 09:31:23.250790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.013 [2024-11-15 09:31:23.250842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.013 BaseBdev1 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 BaseBdev2_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:35.013 09:31:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 true 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 [2024-11-15 09:31:23.323299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:35.013 [2024-11-15 09:31:23.323459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.013 [2024-11-15 09:31:23.323480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:35.013 [2024-11-15 09:31:23.323492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.013 [2024-11-15 09:31:23.325657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.013 [2024-11-15 09:31:23.325703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.013 BaseBdev2 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:35.013 BaseBdev3_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 true 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 [2024-11-15 09:31:23.407304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:35.013 [2024-11-15 09:31:23.407388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.013 [2024-11-15 09:31:23.407412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:35.013 [2024-11-15 09:31:23.407425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.013 [2024-11-15 09:31:23.410007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.013 [2024-11-15 09:31:23.410056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:35.013 BaseBdev3 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.013 BaseBdev4_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:35.013 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.014 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.014 true 00:12:35.014 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.014 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:35.014 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.014 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.273 [2024-11-15 09:31:23.482327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:35.273 [2024-11-15 09:31:23.482412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.273 [2024-11-15 09:31:23.482437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:35.273 [2024-11-15 09:31:23.482450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.273 [2024-11-15 09:31:23.484987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.273 [2024-11-15 09:31:23.485129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:35.273 BaseBdev4 
00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.273 [2024-11-15 09:31:23.494384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.273 [2024-11-15 09:31:23.496559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.273 [2024-11-15 09:31:23.496748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.273 [2024-11-15 09:31:23.496832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:35.273 [2024-11-15 09:31:23.497138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:35.273 [2024-11-15 09:31:23.497156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.273 [2024-11-15 09:31:23.497463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:35.273 [2024-11-15 09:31:23.497684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:35.273 [2024-11-15 09:31:23.497695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:35.273 [2024-11-15 09:31:23.497914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.273 "name": "raid_bdev1", 00:12:35.273 "uuid": "3ea2381c-b1d7-4eb1-b288-9d9abb002348", 00:12:35.273 "strip_size_kb": 0, 00:12:35.273 "state": "online", 00:12:35.273 "raid_level": "raid1", 00:12:35.273 "superblock": true, 00:12:35.273 "num_base_bdevs": 4, 00:12:35.273 "num_base_bdevs_discovered": 4, 00:12:35.273 
"num_base_bdevs_operational": 4, 00:12:35.273 "base_bdevs_list": [ 00:12:35.273 { 00:12:35.273 "name": "BaseBdev1", 00:12:35.273 "uuid": "c6b132e4-cbad-5425-99de-5895b3494c52", 00:12:35.273 "is_configured": true, 00:12:35.273 "data_offset": 2048, 00:12:35.273 "data_size": 63488 00:12:35.273 }, 00:12:35.273 { 00:12:35.273 "name": "BaseBdev2", 00:12:35.273 "uuid": "e521f5ea-0b24-52c4-9c1a-1003a51af827", 00:12:35.273 "is_configured": true, 00:12:35.273 "data_offset": 2048, 00:12:35.273 "data_size": 63488 00:12:35.273 }, 00:12:35.273 { 00:12:35.273 "name": "BaseBdev3", 00:12:35.273 "uuid": "3c92b9df-eb80-5733-9807-3cecf3de8f02", 00:12:35.273 "is_configured": true, 00:12:35.273 "data_offset": 2048, 00:12:35.273 "data_size": 63488 00:12:35.273 }, 00:12:35.273 { 00:12:35.273 "name": "BaseBdev4", 00:12:35.273 "uuid": "b320ba2a-5842-5b02-bf8e-2d65d922a776", 00:12:35.273 "is_configured": true, 00:12:35.273 "data_offset": 2048, 00:12:35.273 "data_size": 63488 00:12:35.273 } 00:12:35.273 ] 00:12:35.273 }' 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.273 09:31:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.533 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:35.533 09:31:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:35.792 [2024-11-15 09:31:24.059041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.731 [2024-11-15 09:31:24.963652] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:36.731 [2024-11-15 09:31:24.963892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.731 [2024-11-15 09:31:24.964222] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.731 09:31:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.731 09:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.731 "name": "raid_bdev1", 00:12:36.731 "uuid": "3ea2381c-b1d7-4eb1-b288-9d9abb002348", 00:12:36.731 "strip_size_kb": 0, 00:12:36.731 "state": "online", 00:12:36.731 "raid_level": "raid1", 00:12:36.731 "superblock": true, 00:12:36.731 "num_base_bdevs": 4, 00:12:36.731 "num_base_bdevs_discovered": 3, 00:12:36.731 "num_base_bdevs_operational": 3, 00:12:36.731 "base_bdevs_list": [ 00:12:36.731 { 00:12:36.731 "name": null, 00:12:36.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.731 "is_configured": false, 00:12:36.731 "data_offset": 0, 00:12:36.731 "data_size": 63488 00:12:36.731 }, 00:12:36.731 { 00:12:36.731 "name": "BaseBdev2", 00:12:36.731 "uuid": "e521f5ea-0b24-52c4-9c1a-1003a51af827", 00:12:36.731 "is_configured": true, 00:12:36.731 "data_offset": 2048, 00:12:36.731 "data_size": 63488 00:12:36.731 }, 00:12:36.731 { 00:12:36.731 "name": "BaseBdev3", 00:12:36.731 "uuid": "3c92b9df-eb80-5733-9807-3cecf3de8f02", 00:12:36.731 "is_configured": true, 00:12:36.731 "data_offset": 2048, 00:12:36.732 "data_size": 63488 00:12:36.732 }, 00:12:36.732 { 00:12:36.732 "name": "BaseBdev4", 00:12:36.732 "uuid": "b320ba2a-5842-5b02-bf8e-2d65d922a776", 00:12:36.732 "is_configured": true, 00:12:36.732 "data_offset": 2048, 00:12:36.732 "data_size": 63488 00:12:36.732 } 00:12:36.732 ] 
00:12:36.732 }' 00:12:36.732 09:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.732 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.301 [2024-11-15 09:31:25.474013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.301 [2024-11-15 09:31:25.474167] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.301 [2024-11-15 09:31:25.477445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.301 [2024-11-15 09:31:25.477578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.301 [2024-11-15 09:31:25.477705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.301 [2024-11-15 09:31:25.477722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:37.301 { 00:12:37.301 "results": [ 00:12:37.301 { 00:12:37.301 "job": "raid_bdev1", 00:12:37.301 "core_mask": "0x1", 00:12:37.301 "workload": "randrw", 00:12:37.301 "percentage": 50, 00:12:37.301 "status": "finished", 00:12:37.301 "queue_depth": 1, 00:12:37.301 "io_size": 131072, 00:12:37.301 "runtime": 1.415505, 00:12:37.301 "iops": 9807.10064605918, 00:12:37.301 "mibps": 1225.8875807573975, 00:12:37.301 "io_failed": 0, 00:12:37.301 "io_timeout": 0, 00:12:37.301 "avg_latency_us": 98.609998055979, 00:12:37.301 "min_latency_us": 25.2646288209607, 00:12:37.301 "max_latency_us": 1831.5737991266376 00:12:37.301 } 00:12:37.301 ], 00:12:37.301 "core_count": 1 
00:12:37.301 } 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75560 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75560 ']' 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75560 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75560 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.301 killing process with pid 75560 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75560' 00:12:37.301 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75560 00:12:37.301 [2024-11-15 09:31:25.525116] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.302 09:31:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75560 00:12:37.561 [2024-11-15 09:31:25.913682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.zwmbfpIyDL 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:38.942 00:12:38.942 real 0m5.195s 00:12:38.942 user 0m6.096s 00:12:38.942 sys 0m0.714s 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:38.942 09:31:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.942 ************************************ 00:12:38.942 END TEST raid_write_error_test 00:12:38.942 ************************************ 00:12:38.942 09:31:27 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:38.942 09:31:27 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:38.942 09:31:27 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:38.942 09:31:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:38.942 09:31:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.942 09:31:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.942 ************************************ 00:12:38.942 START TEST raid_rebuild_test 00:12:38.942 ************************************ 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:38.942 
09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:38.942 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:39.202 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:39.202 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:39.202 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:39.202 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:39.202 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75710 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75710 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75710 ']' 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:39.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:39.203 09:31:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.203 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:39.203 Zero copy mechanism will not be used. 00:12:39.203 [2024-11-15 09:31:27.510244] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:12:39.203 [2024-11-15 09:31:27.510381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75710 ] 00:12:39.463 [2024-11-15 09:31:27.689597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.463 [2024-11-15 09:31:27.825428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.722 [2024-11-15 09:31:28.067094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.722 [2024-11-15 09:31:28.067254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.982 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:39.982 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:12:39.982 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:39.982 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:39.982 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.982 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.241 BaseBdev1_malloc 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.241 [2024-11-15 09:31:28.478016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:40.241 
[2024-11-15 09:31:28.478211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.241 [2024-11-15 09:31:28.478248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:40.241 [2024-11-15 09:31:28.478263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.241 [2024-11-15 09:31:28.480941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.241 [2024-11-15 09:31:28.480992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:40.241 BaseBdev1 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.241 BaseBdev2_malloc 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.241 [2024-11-15 09:31:28.540683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:40.241 [2024-11-15 09:31:28.540774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.241 [2024-11-15 09:31:28.540799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:40.241 [2024-11-15 09:31:28.540812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.241 [2024-11-15 09:31:28.543428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.241 [2024-11-15 09:31:28.543605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:40.241 BaseBdev2 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.241 spare_malloc 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.241 spare_delay 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.241 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.241 [2024-11-15 09:31:28.641768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:40.241 [2024-11-15 09:31:28.641988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:40.241 [2024-11-15 09:31:28.642025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:40.241 [2024-11-15 09:31:28.642041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.241 [2024-11-15 09:31:28.644676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.242 [2024-11-15 09:31:28.644726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:40.242 spare 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.242 [2024-11-15 09:31:28.653798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.242 [2024-11-15 09:31:28.656086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.242 [2024-11-15 09:31:28.656197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:40.242 [2024-11-15 09:31:28.656213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:40.242 [2024-11-15 09:31:28.656535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:40.242 [2024-11-15 09:31:28.656726] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:40.242 [2024-11-15 09:31:28.656739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:40.242 [2024-11-15 09:31:28.656966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.242 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.502 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.502 "name": "raid_bdev1", 00:12:40.502 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:40.502 "strip_size_kb": 0, 00:12:40.502 "state": "online", 00:12:40.502 
"raid_level": "raid1", 00:12:40.502 "superblock": false, 00:12:40.502 "num_base_bdevs": 2, 00:12:40.502 "num_base_bdevs_discovered": 2, 00:12:40.502 "num_base_bdevs_operational": 2, 00:12:40.502 "base_bdevs_list": [ 00:12:40.502 { 00:12:40.502 "name": "BaseBdev1", 00:12:40.502 "uuid": "ab8b0f77-d66f-59cf-b001-9b4b6b84b79b", 00:12:40.502 "is_configured": true, 00:12:40.502 "data_offset": 0, 00:12:40.502 "data_size": 65536 00:12:40.502 }, 00:12:40.502 { 00:12:40.502 "name": "BaseBdev2", 00:12:40.502 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:40.502 "is_configured": true, 00:12:40.502 "data_offset": 0, 00:12:40.502 "data_size": 65536 00:12:40.502 } 00:12:40.502 ] 00:12:40.502 }' 00:12:40.502 09:31:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.502 09:31:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 [2024-11-15 09:31:29.129366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.761 09:31:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.761 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.020 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:41.020 [2024-11-15 09:31:29.468540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:41.020 /dev/nbd0 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:41.279 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.279 1+0 records in 00:12:41.279 1+0 records out 00:12:41.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691648 s, 5.9 MB/s 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:41.280 09:31:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:46.555 65536+0 records in 00:12:46.555 65536+0 records out 00:12:46.555 33554432 bytes (34 MB, 32 MiB) copied, 5.01478 s, 6.7 MB/s 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.555 [2024-11-15 09:31:34.764110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.555 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.556 [2024-11-15 09:31:34.796318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.556 09:31:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.556 "name": "raid_bdev1", 00:12:46.556 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:46.556 "strip_size_kb": 0, 00:12:46.556 "state": "online", 00:12:46.556 "raid_level": "raid1", 00:12:46.556 "superblock": false, 00:12:46.556 "num_base_bdevs": 2, 00:12:46.556 "num_base_bdevs_discovered": 1, 00:12:46.556 "num_base_bdevs_operational": 1, 00:12:46.556 "base_bdevs_list": [ 00:12:46.556 { 00:12:46.556 "name": null, 00:12:46.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.556 "is_configured": false, 00:12:46.556 "data_offset": 0, 00:12:46.556 "data_size": 65536 00:12:46.556 }, 00:12:46.556 { 00:12:46.556 "name": "BaseBdev2", 00:12:46.556 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:46.556 "is_configured": true, 00:12:46.556 "data_offset": 0, 00:12:46.556 "data_size": 65536 00:12:46.556 } 00:12:46.556 ] 00:12:46.556 }' 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.556 09:31:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.815 09:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.815 09:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.815 09:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.815 [2024-11-15 09:31:35.243873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.815 [2024-11-15 09:31:35.261221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:46.815 09:31:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.815 09:31:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:46.815 [2024-11-15 09:31:35.263389] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.215 "name": "raid_bdev1", 00:12:48.215 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:48.215 "strip_size_kb": 0, 00:12:48.215 "state": "online", 00:12:48.215 "raid_level": "raid1", 00:12:48.215 "superblock": false, 00:12:48.215 "num_base_bdevs": 2, 00:12:48.215 "num_base_bdevs_discovered": 2, 00:12:48.215 "num_base_bdevs_operational": 2, 00:12:48.215 "process": { 00:12:48.215 "type": "rebuild", 00:12:48.215 "target": "spare", 00:12:48.215 "progress": { 00:12:48.215 
"blocks": 20480, 00:12:48.215 "percent": 31 00:12:48.215 } 00:12:48.215 }, 00:12:48.215 "base_bdevs_list": [ 00:12:48.215 { 00:12:48.215 "name": "spare", 00:12:48.215 "uuid": "28fe619c-941e-5b68-bfce-d2bd29288be4", 00:12:48.215 "is_configured": true, 00:12:48.215 "data_offset": 0, 00:12:48.215 "data_size": 65536 00:12:48.215 }, 00:12:48.215 { 00:12:48.215 "name": "BaseBdev2", 00:12:48.215 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:48.215 "is_configured": true, 00:12:48.215 "data_offset": 0, 00:12:48.215 "data_size": 65536 00:12:48.215 } 00:12:48.215 ] 00:12:48.215 }' 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.215 [2024-11-15 09:31:36.426994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.215 [2024-11-15 09:31:36.469352] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:48.215 [2024-11-15 09:31:36.469521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.215 [2024-11-15 09:31:36.469559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:48.215 [2024-11-15 09:31:36.469583] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:48.215 09:31:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.215 "name": "raid_bdev1", 00:12:48.215 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:48.215 "strip_size_kb": 0, 00:12:48.215 "state": "online", 00:12:48.215 "raid_level": "raid1", 00:12:48.215 
"superblock": false, 00:12:48.215 "num_base_bdevs": 2, 00:12:48.215 "num_base_bdevs_discovered": 1, 00:12:48.215 "num_base_bdevs_operational": 1, 00:12:48.215 "base_bdevs_list": [ 00:12:48.215 { 00:12:48.215 "name": null, 00:12:48.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.215 "is_configured": false, 00:12:48.215 "data_offset": 0, 00:12:48.215 "data_size": 65536 00:12:48.215 }, 00:12:48.215 { 00:12:48.215 "name": "BaseBdev2", 00:12:48.215 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:48.215 "is_configured": true, 00:12:48.215 "data_offset": 0, 00:12:48.215 "data_size": 65536 00:12:48.215 } 00:12:48.215 ] 00:12:48.215 }' 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.215 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.785 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.785 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.785 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:48.786 "name": "raid_bdev1", 00:12:48.786 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:48.786 "strip_size_kb": 0, 00:12:48.786 "state": "online", 00:12:48.786 "raid_level": "raid1", 00:12:48.786 "superblock": false, 00:12:48.786 "num_base_bdevs": 2, 00:12:48.786 "num_base_bdevs_discovered": 1, 00:12:48.786 "num_base_bdevs_operational": 1, 00:12:48.786 "base_bdevs_list": [ 00:12:48.786 { 00:12:48.786 "name": null, 00:12:48.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.786 "is_configured": false, 00:12:48.786 "data_offset": 0, 00:12:48.786 "data_size": 65536 00:12:48.786 }, 00:12:48.786 { 00:12:48.786 "name": "BaseBdev2", 00:12:48.786 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:48.786 "is_configured": true, 00:12:48.786 "data_offset": 0, 00:12:48.786 "data_size": 65536 00:12:48.786 } 00:12:48.786 ] 00:12:48.786 }' 00:12:48.786 09:31:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.786 09:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.786 09:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.786 09:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.786 09:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:48.786 09:31:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.786 09:31:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.786 [2024-11-15 09:31:37.084604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:48.786 [2024-11-15 09:31:37.100756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:48.786 09:31:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.786 
09:31:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:48.786 [2024-11-15 09:31:37.102677] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.725 "name": "raid_bdev1", 00:12:49.725 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:49.725 "strip_size_kb": 0, 00:12:49.725 "state": "online", 00:12:49.725 "raid_level": "raid1", 00:12:49.725 "superblock": false, 00:12:49.725 "num_base_bdevs": 2, 00:12:49.725 "num_base_bdevs_discovered": 2, 00:12:49.725 "num_base_bdevs_operational": 2, 00:12:49.725 "process": { 00:12:49.725 "type": "rebuild", 00:12:49.725 "target": "spare", 00:12:49.725 "progress": { 00:12:49.725 "blocks": 20480, 00:12:49.725 "percent": 31 00:12:49.725 } 00:12:49.725 }, 00:12:49.725 "base_bdevs_list": [ 
00:12:49.725 { 00:12:49.725 "name": "spare", 00:12:49.725 "uuid": "28fe619c-941e-5b68-bfce-d2bd29288be4", 00:12:49.725 "is_configured": true, 00:12:49.725 "data_offset": 0, 00:12:49.725 "data_size": 65536 00:12:49.725 }, 00:12:49.725 { 00:12:49.725 "name": "BaseBdev2", 00:12:49.725 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:49.725 "is_configured": true, 00:12:49.725 "data_offset": 0, 00:12:49.725 "data_size": 65536 00:12:49.725 } 00:12:49.725 ] 00:12:49.725 }' 00:12:49.725 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.985 
09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.985 "name": "raid_bdev1", 00:12:49.985 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:49.985 "strip_size_kb": 0, 00:12:49.985 "state": "online", 00:12:49.985 "raid_level": "raid1", 00:12:49.985 "superblock": false, 00:12:49.985 "num_base_bdevs": 2, 00:12:49.985 "num_base_bdevs_discovered": 2, 00:12:49.985 "num_base_bdevs_operational": 2, 00:12:49.985 "process": { 00:12:49.985 "type": "rebuild", 00:12:49.985 "target": "spare", 00:12:49.985 "progress": { 00:12:49.985 "blocks": 22528, 00:12:49.985 "percent": 34 00:12:49.985 } 00:12:49.985 }, 00:12:49.985 "base_bdevs_list": [ 00:12:49.985 { 00:12:49.985 "name": "spare", 00:12:49.985 "uuid": "28fe619c-941e-5b68-bfce-d2bd29288be4", 00:12:49.985 "is_configured": true, 00:12:49.985 "data_offset": 0, 00:12:49.985 "data_size": 65536 00:12:49.985 }, 00:12:49.985 { 00:12:49.985 "name": "BaseBdev2", 00:12:49.985 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:49.985 "is_configured": true, 00:12:49.985 "data_offset": 0, 00:12:49.985 "data_size": 65536 00:12:49.985 } 00:12:49.985 ] 00:12:49.985 }' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.985 09:31:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.924 09:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.183 09:31:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.183 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.183 "name": "raid_bdev1", 00:12:51.183 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:51.183 "strip_size_kb": 0, 00:12:51.183 "state": "online", 00:12:51.183 "raid_level": "raid1", 00:12:51.183 "superblock": false, 00:12:51.183 "num_base_bdevs": 2, 00:12:51.183 "num_base_bdevs_discovered": 2, 00:12:51.183 "num_base_bdevs_operational": 2, 00:12:51.183 "process": { 
00:12:51.183 "type": "rebuild", 00:12:51.183 "target": "spare", 00:12:51.183 "progress": { 00:12:51.183 "blocks": 45056, 00:12:51.183 "percent": 68 00:12:51.183 } 00:12:51.183 }, 00:12:51.183 "base_bdevs_list": [ 00:12:51.183 { 00:12:51.183 "name": "spare", 00:12:51.183 "uuid": "28fe619c-941e-5b68-bfce-d2bd29288be4", 00:12:51.183 "is_configured": true, 00:12:51.183 "data_offset": 0, 00:12:51.183 "data_size": 65536 00:12:51.183 }, 00:12:51.183 { 00:12:51.183 "name": "BaseBdev2", 00:12:51.183 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:51.183 "is_configured": true, 00:12:51.183 "data_offset": 0, 00:12:51.183 "data_size": 65536 00:12:51.183 } 00:12:51.183 ] 00:12:51.183 }' 00:12:51.183 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.183 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.183 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.183 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.183 09:31:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:52.122 [2024-11-15 09:31:40.317944] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:52.122 [2024-11-15 09:31:40.318026] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:52.122 [2024-11-15 09:31:40.318074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.122 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.381 "name": "raid_bdev1", 00:12:52.381 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:52.381 "strip_size_kb": 0, 00:12:52.381 "state": "online", 00:12:52.381 "raid_level": "raid1", 00:12:52.381 "superblock": false, 00:12:52.381 "num_base_bdevs": 2, 00:12:52.381 "num_base_bdevs_discovered": 2, 00:12:52.381 "num_base_bdevs_operational": 2, 00:12:52.381 "base_bdevs_list": [ 00:12:52.381 { 00:12:52.381 "name": "spare", 00:12:52.381 "uuid": "28fe619c-941e-5b68-bfce-d2bd29288be4", 00:12:52.381 "is_configured": true, 00:12:52.381 "data_offset": 0, 00:12:52.381 "data_size": 65536 00:12:52.381 }, 00:12:52.381 { 00:12:52.381 "name": "BaseBdev2", 00:12:52.381 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:52.381 "is_configured": true, 00:12:52.381 "data_offset": 0, 00:12:52.381 "data_size": 65536 00:12:52.381 } 00:12:52.381 ] 00:12:52.381 }' 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:52.381 09:31:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.381 "name": "raid_bdev1", 00:12:52.381 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:52.381 "strip_size_kb": 0, 00:12:52.381 "state": "online", 00:12:52.382 "raid_level": "raid1", 00:12:52.382 "superblock": false, 00:12:52.382 "num_base_bdevs": 2, 00:12:52.382 "num_base_bdevs_discovered": 2, 00:12:52.382 "num_base_bdevs_operational": 2, 00:12:52.382 "base_bdevs_list": [ 00:12:52.382 { 00:12:52.382 "name": "spare", 00:12:52.382 "uuid": "28fe619c-941e-5b68-bfce-d2bd29288be4", 00:12:52.382 "is_configured": true, 
00:12:52.382 "data_offset": 0, 00:12:52.382 "data_size": 65536 00:12:52.382 }, 00:12:52.382 { 00:12:52.382 "name": "BaseBdev2", 00:12:52.382 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:52.382 "is_configured": true, 00:12:52.382 "data_offset": 0, 00:12:52.382 "data_size": 65536 00:12:52.382 } 00:12:52.382 ] 00:12:52.382 }' 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.382 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.641 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.641 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.641 "name": "raid_bdev1", 00:12:52.641 "uuid": "faf50142-f98e-426a-9fc8-86fa62758ac8", 00:12:52.641 "strip_size_kb": 0, 00:12:52.641 "state": "online", 00:12:52.641 "raid_level": "raid1", 00:12:52.641 "superblock": false, 00:12:52.641 "num_base_bdevs": 2, 00:12:52.641 "num_base_bdevs_discovered": 2, 00:12:52.641 "num_base_bdevs_operational": 2, 00:12:52.641 "base_bdevs_list": [ 00:12:52.641 { 00:12:52.641 "name": "spare", 00:12:52.641 "uuid": "28fe619c-941e-5b68-bfce-d2bd29288be4", 00:12:52.641 "is_configured": true, 00:12:52.641 "data_offset": 0, 00:12:52.641 "data_size": 65536 00:12:52.641 }, 00:12:52.641 { 00:12:52.641 "name": "BaseBdev2", 00:12:52.641 "uuid": "8145ebf6-1d43-57c4-9480-d4196e2b60a4", 00:12:52.641 "is_configured": true, 00:12:52.641 "data_offset": 0, 00:12:52.641 "data_size": 65536 00:12:52.641 } 00:12:52.641 ] 00:12:52.641 }' 00:12:52.641 09:31:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.641 09:31:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.902 [2024-11-15 09:31:41.252741] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.902 [2024-11-15 09:31:41.252782] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.902 [2024-11-15 09:31:41.252894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.902 [2024-11-15 09:31:41.252974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.902 [2024-11-15 09:31:41.252986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:52.902 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:53.165 /dev/nbd0 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.165 1+0 records in 00:12:53.165 1+0 records out 00:12:53.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420626 s, 9.7 MB/s 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:53.165 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:53.437 /dev/nbd1 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.437 1+0 records in 00:12:53.437 1+0 records out 00:12:53.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430935 s, 9.5 MB/s 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:53.437 09:31:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:53.696 09:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:53.696 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.696 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:53.696 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.696 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:53.696 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.696 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.956 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75710 00:12:54.215 09:31:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75710 ']' 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75710 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75710 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75710' 00:12:54.215 killing process with pid 75710 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75710 00:12:54.215 Received shutdown signal, test time was about 60.000000 seconds 00:12:54.215 00:12:54.215 Latency(us) 00:12:54.215 [2024-11-15T09:31:42.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.215 [2024-11-15T09:31:42.678Z] =================================================================================================================== 00:12:54.215 [2024-11-15T09:31:42.678Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:54.215 [2024-11-15 09:31:42.564089] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.215 09:31:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75710 00:12:54.475 [2024-11-15 09:31:42.881328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.856 09:31:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:55.856 00:12:55.856 real 0m16.601s 00:12:55.856 user 0m18.037s 00:12:55.856 sys 0m3.249s 00:12:55.856 
************************************ 00:12:55.856 END TEST raid_rebuild_test 00:12:55.856 09:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:55.856 09:31:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.856 ************************************ 00:12:55.856 09:31:44 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:55.856 09:31:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:55.857 09:31:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:55.857 09:31:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.857 ************************************ 00:12:55.857 START TEST raid_rebuild_test_sb 00:12:55.857 ************************************ 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76141 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76141 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 76141 ']' 00:12:55.857 09:31:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:55.857 09:31:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.857 [2024-11-15 09:31:44.188065] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:12:55.857 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:55.857 Zero copy mechanism will not be used. 00:12:55.857 [2024-11-15 09:31:44.188308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76141 ] 00:12:56.117 [2024-11-15 09:31:44.366707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.117 [2024-11-15 09:31:44.487727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.391 [2024-11-15 09:31:44.696417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.391 [2024-11-15 09:31:44.696575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.684 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:56.684 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:56.684 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:56.684 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:56.684 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.684 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.684 BaseBdev1_malloc 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.685 [2024-11-15 09:31:45.112387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:56.685 [2024-11-15 09:31:45.112542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.685 [2024-11-15 09:31:45.112587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.685 [2024-11-15 09:31:45.112620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.685 [2024-11-15 09:31:45.114883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.685 [2024-11-15 09:31:45.114963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.685 BaseBdev1 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:56.685 09:31:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.685 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 BaseBdev2_malloc 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 [2024-11-15 09:31:45.170411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:56.945 [2024-11-15 09:31:45.170488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.945 [2024-11-15 09:31:45.170508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:56.945 [2024-11-15 09:31:45.170521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.945 [2024-11-15 09:31:45.172873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.945 [2024-11-15 09:31:45.172919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:56.945 BaseBdev2 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 spare_malloc 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 spare_delay 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 [2024-11-15 09:31:45.251080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:56.945 [2024-11-15 09:31:45.251151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.945 [2024-11-15 09:31:45.251188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:56.945 [2024-11-15 09:31:45.251199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.945 [2024-11-15 09:31:45.253493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.945 [2024-11-15 09:31:45.253616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:56.945 spare 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.945 09:31:45 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 [2024-11-15 09:31:45.263163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.945 [2024-11-15 09:31:45.265102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.945 [2024-11-15 09:31:45.265310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:56.945 [2024-11-15 09:31:45.265327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.945 [2024-11-15 09:31:45.265612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:56.945 [2024-11-15 09:31:45.265772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:56.945 [2024-11-15 09:31:45.265781] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:56.945 [2024-11-15 09:31:45.265962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.945 "name": "raid_bdev1", 00:12:56.945 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:12:56.945 "strip_size_kb": 0, 00:12:56.945 "state": "online", 00:12:56.945 "raid_level": "raid1", 00:12:56.945 "superblock": true, 00:12:56.945 "num_base_bdevs": 2, 00:12:56.945 "num_base_bdevs_discovered": 2, 00:12:56.945 "num_base_bdevs_operational": 2, 00:12:56.945 "base_bdevs_list": [ 00:12:56.945 { 00:12:56.945 "name": "BaseBdev1", 00:12:56.945 "uuid": "47799fb7-86b7-5bf8-924c-5f93abd05e98", 00:12:56.945 "is_configured": true, 00:12:56.945 "data_offset": 2048, 00:12:56.945 "data_size": 63488 00:12:56.945 }, 00:12:56.945 { 00:12:56.945 "name": "BaseBdev2", 00:12:56.945 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:12:56.945 "is_configured": true, 00:12:56.945 "data_offset": 2048, 00:12:56.945 "data_size": 63488 00:12:56.945 } 00:12:56.945 ] 00:12:56.945 }' 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.945 09:31:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.515 [2024-11-15 09:31:45.726635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.515 09:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:57.775 [2024-11-15 09:31:46.009928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:57.775 /dev/nbd0 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.775 1+0 records in 00:12:57.775 1+0 records out 00:12:57.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402175 s, 10.2 MB/s 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:57.775 09:31:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:03.052 63488+0 records in 00:13:03.052 63488+0 records out 00:13:03.052 32505856 bytes (33 MB, 31 MiB) copied, 4.68139 s, 6.9 MB/s 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.052 09:31:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:03.052 [2024-11-15 09:31:50.972970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.052 09:31:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.052 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.052 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:03.052 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.052 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:03.052 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.052 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 [2024-11-15 09:31:51.013456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.053 "name": "raid_bdev1", 00:13:03.053 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:03.053 "strip_size_kb": 0, 00:13:03.053 "state": "online", 00:13:03.053 "raid_level": "raid1", 00:13:03.053 "superblock": true, 
00:13:03.053 "num_base_bdevs": 2, 00:13:03.053 "num_base_bdevs_discovered": 1, 00:13:03.053 "num_base_bdevs_operational": 1, 00:13:03.053 "base_bdevs_list": [ 00:13:03.053 { 00:13:03.053 "name": null, 00:13:03.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.053 "is_configured": false, 00:13:03.053 "data_offset": 0, 00:13:03.053 "data_size": 63488 00:13:03.053 }, 00:13:03.053 { 00:13:03.053 "name": "BaseBdev2", 00:13:03.053 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:03.053 "is_configured": true, 00:13:03.053 "data_offset": 2048, 00:13:03.053 "data_size": 63488 00:13:03.053 } 00:13:03.053 ] 00:13:03.053 }' 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.053 [2024-11-15 09:31:51.468707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:03.053 [2024-11-15 09:31:51.485957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.053 09:31:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:03.053 [2024-11-15 09:31:51.488149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.433 "name": "raid_bdev1", 00:13:04.433 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:04.433 "strip_size_kb": 0, 00:13:04.433 "state": "online", 00:13:04.433 "raid_level": "raid1", 00:13:04.433 "superblock": true, 00:13:04.433 "num_base_bdevs": 2, 00:13:04.433 "num_base_bdevs_discovered": 2, 00:13:04.433 "num_base_bdevs_operational": 2, 00:13:04.433 "process": { 00:13:04.433 "type": "rebuild", 00:13:04.433 "target": "spare", 00:13:04.433 "progress": { 00:13:04.433 "blocks": 20480, 00:13:04.433 "percent": 32 00:13:04.433 } 00:13:04.433 }, 00:13:04.433 "base_bdevs_list": [ 00:13:04.433 { 00:13:04.433 "name": "spare", 00:13:04.433 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:04.433 "is_configured": true, 00:13:04.433 "data_offset": 2048, 00:13:04.433 "data_size": 63488 00:13:04.433 }, 00:13:04.433 { 00:13:04.433 "name": "BaseBdev2", 00:13:04.433 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:04.433 "is_configured": true, 00:13:04.433 "data_offset": 2048, 00:13:04.433 "data_size": 63488 
00:13:04.433 } 00:13:04.433 ] 00:13:04.433 }' 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.433 [2024-11-15 09:31:52.655722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.433 [2024-11-15 09:31:52.693924] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.433 [2024-11-15 09:31:52.694044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.433 [2024-11-15 09:31:52.694061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.433 [2024-11-15 09:31:52.694074] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.433 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.433 "name": "raid_bdev1", 00:13:04.433 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:04.433 "strip_size_kb": 0, 00:13:04.433 "state": "online", 00:13:04.433 "raid_level": "raid1", 00:13:04.433 "superblock": true, 00:13:04.433 "num_base_bdevs": 2, 00:13:04.433 "num_base_bdevs_discovered": 1, 00:13:04.433 "num_base_bdevs_operational": 1, 00:13:04.433 "base_bdevs_list": [ 00:13:04.433 { 00:13:04.433 "name": null, 00:13:04.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.433 "is_configured": false, 00:13:04.433 "data_offset": 0, 00:13:04.433 "data_size": 63488 00:13:04.433 }, 00:13:04.433 { 00:13:04.433 "name": "BaseBdev2", 00:13:04.433 "uuid": 
"318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:04.433 "is_configured": true, 00:13:04.434 "data_offset": 2048, 00:13:04.434 "data_size": 63488 00:13:04.434 } 00:13:04.434 ] 00:13:04.434 }' 00:13:04.434 09:31:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.434 09:31:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.002 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.002 "name": "raid_bdev1", 00:13:05.002 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:05.002 "strip_size_kb": 0, 00:13:05.002 "state": "online", 00:13:05.002 "raid_level": "raid1", 00:13:05.002 "superblock": true, 00:13:05.002 "num_base_bdevs": 2, 00:13:05.002 "num_base_bdevs_discovered": 1, 00:13:05.002 "num_base_bdevs_operational": 1, 00:13:05.002 "base_bdevs_list": [ 00:13:05.002 { 
00:13:05.002 "name": null, 00:13:05.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.002 "is_configured": false, 00:13:05.002 "data_offset": 0, 00:13:05.002 "data_size": 63488 00:13:05.002 }, 00:13:05.002 { 00:13:05.002 "name": "BaseBdev2", 00:13:05.002 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:05.002 "is_configured": true, 00:13:05.002 "data_offset": 2048, 00:13:05.002 "data_size": 63488 00:13:05.002 } 00:13:05.002 ] 00:13:05.002 }' 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.003 [2024-11-15 09:31:53.331945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.003 [2024-11-15 09:31:53.348262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.003 09:31:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:05.003 [2024-11-15 09:31:53.350129] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.941 09:31:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.941 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.941 "name": "raid_bdev1", 00:13:05.941 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:05.941 "strip_size_kb": 0, 00:13:05.941 "state": "online", 00:13:05.941 "raid_level": "raid1", 00:13:05.941 "superblock": true, 00:13:05.941 "num_base_bdevs": 2, 00:13:05.941 "num_base_bdevs_discovered": 2, 00:13:05.941 "num_base_bdevs_operational": 2, 00:13:05.941 "process": { 00:13:05.941 "type": "rebuild", 00:13:05.941 "target": "spare", 00:13:05.941 "progress": { 00:13:05.941 "blocks": 20480, 00:13:05.941 "percent": 32 00:13:05.941 } 00:13:05.941 }, 00:13:05.941 "base_bdevs_list": [ 00:13:05.941 { 00:13:05.941 "name": "spare", 00:13:05.941 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:05.941 "is_configured": true, 00:13:05.941 "data_offset": 2048, 00:13:05.941 "data_size": 63488 00:13:05.941 }, 00:13:05.941 { 00:13:05.941 "name": "BaseBdev2", 00:13:05.941 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:05.941 
"is_configured": true, 00:13:05.941 "data_offset": 2048, 00:13:05.941 "data_size": 63488 00:13:05.941 } 00:13:05.941 ] 00:13:05.941 }' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:06.201 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=408 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.201 "name": "raid_bdev1", 00:13:06.201 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:06.201 "strip_size_kb": 0, 00:13:06.201 "state": "online", 00:13:06.201 "raid_level": "raid1", 00:13:06.201 "superblock": true, 00:13:06.201 "num_base_bdevs": 2, 00:13:06.201 "num_base_bdevs_discovered": 2, 00:13:06.201 "num_base_bdevs_operational": 2, 00:13:06.201 "process": { 00:13:06.201 "type": "rebuild", 00:13:06.201 "target": "spare", 00:13:06.201 "progress": { 00:13:06.201 "blocks": 22528, 00:13:06.201 "percent": 35 00:13:06.201 } 00:13:06.201 }, 00:13:06.201 "base_bdevs_list": [ 00:13:06.201 { 00:13:06.201 "name": "spare", 00:13:06.201 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:06.201 "is_configured": true, 00:13:06.201 "data_offset": 2048, 00:13:06.201 "data_size": 63488 00:13:06.201 }, 00:13:06.201 { 00:13:06.201 "name": "BaseBdev2", 00:13:06.201 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:06.201 "is_configured": true, 00:13:06.201 "data_offset": 2048, 00:13:06.201 "data_size": 63488 00:13:06.201 } 00:13:06.201 ] 00:13:06.201 }' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.201 09:31:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.201 09:31:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.151 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.151 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.151 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.151 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.151 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.151 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.411 "name": "raid_bdev1", 00:13:07.411 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:07.411 "strip_size_kb": 0, 00:13:07.411 "state": "online", 00:13:07.411 "raid_level": "raid1", 00:13:07.411 "superblock": true, 00:13:07.411 "num_base_bdevs": 2, 00:13:07.411 "num_base_bdevs_discovered": 2, 00:13:07.411 "num_base_bdevs_operational": 2, 00:13:07.411 "process": { 
00:13:07.411 "type": "rebuild", 00:13:07.411 "target": "spare", 00:13:07.411 "progress": { 00:13:07.411 "blocks": 45056, 00:13:07.411 "percent": 70 00:13:07.411 } 00:13:07.411 }, 00:13:07.411 "base_bdevs_list": [ 00:13:07.411 { 00:13:07.411 "name": "spare", 00:13:07.411 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:07.411 "is_configured": true, 00:13:07.411 "data_offset": 2048, 00:13:07.411 "data_size": 63488 00:13:07.411 }, 00:13:07.411 { 00:13:07.411 "name": "BaseBdev2", 00:13:07.411 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:07.411 "is_configured": true, 00:13:07.411 "data_offset": 2048, 00:13:07.411 "data_size": 63488 00:13:07.411 } 00:13:07.411 ] 00:13:07.411 }' 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.411 09:31:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.350 [2024-11-15 09:31:56.464126] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:08.350 [2024-11-15 09:31:56.464333] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:08.350 [2024-11-15 09:31:56.464494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.350 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.350 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.351 
09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.351 "name": "raid_bdev1", 00:13:08.351 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:08.351 "strip_size_kb": 0, 00:13:08.351 "state": "online", 00:13:08.351 "raid_level": "raid1", 00:13:08.351 "superblock": true, 00:13:08.351 "num_base_bdevs": 2, 00:13:08.351 "num_base_bdevs_discovered": 2, 00:13:08.351 "num_base_bdevs_operational": 2, 00:13:08.351 "base_bdevs_list": [ 00:13:08.351 { 00:13:08.351 "name": "spare", 00:13:08.351 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:08.351 "is_configured": true, 00:13:08.351 "data_offset": 2048, 00:13:08.351 "data_size": 63488 00:13:08.351 }, 00:13:08.351 { 00:13:08.351 "name": "BaseBdev2", 00:13:08.351 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:08.351 "is_configured": true, 00:13:08.351 "data_offset": 2048, 00:13:08.351 "data_size": 63488 00:13:08.351 } 00:13:08.351 ] 00:13:08.351 }' 00:13:08.351 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.611 "name": "raid_bdev1", 00:13:08.611 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:08.611 "strip_size_kb": 0, 00:13:08.611 "state": "online", 00:13:08.611 "raid_level": "raid1", 00:13:08.611 "superblock": true, 00:13:08.611 "num_base_bdevs": 2, 00:13:08.611 "num_base_bdevs_discovered": 2, 00:13:08.611 "num_base_bdevs_operational": 2, 00:13:08.611 "base_bdevs_list": [ 00:13:08.611 { 00:13:08.611 
"name": "spare", 00:13:08.611 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:08.611 "is_configured": true, 00:13:08.611 "data_offset": 2048, 00:13:08.611 "data_size": 63488 00:13:08.611 }, 00:13:08.611 { 00:13:08.611 "name": "BaseBdev2", 00:13:08.611 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:08.611 "is_configured": true, 00:13:08.611 "data_offset": 2048, 00:13:08.611 "data_size": 63488 00:13:08.611 } 00:13:08.611 ] 00:13:08.611 }' 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.611 09:31:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.611 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.611 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.611 "name": "raid_bdev1", 00:13:08.611 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:08.611 "strip_size_kb": 0, 00:13:08.611 "state": "online", 00:13:08.611 "raid_level": "raid1", 00:13:08.611 "superblock": true, 00:13:08.611 "num_base_bdevs": 2, 00:13:08.611 "num_base_bdevs_discovered": 2, 00:13:08.611 "num_base_bdevs_operational": 2, 00:13:08.611 "base_bdevs_list": [ 00:13:08.611 { 00:13:08.611 "name": "spare", 00:13:08.611 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:08.611 "is_configured": true, 00:13:08.611 "data_offset": 2048, 00:13:08.611 "data_size": 63488 00:13:08.611 }, 00:13:08.611 { 00:13:08.611 "name": "BaseBdev2", 00:13:08.611 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:08.611 "is_configured": true, 00:13:08.611 "data_offset": 2048, 00:13:08.611 "data_size": 63488 00:13:08.611 } 00:13:08.611 ] 00:13:08.611 }' 00:13:08.611 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.611 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.181 [2024-11-15 09:31:57.456557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:09.181 [2024-11-15 09:31:57.456647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:09.181 [2024-11-15 09:31:57.456753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.181 [2024-11-15 09:31:57.456840] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.181 [2024-11-15 09:31:57.456920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:09.181 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.182 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:09.468 /dev/nbd0 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.468 1+0 records in 00:13:09.468 1+0 records out 00:13:09.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293396 s, 14.0 MB/s 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.468 09:31:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:09.727 /dev/nbd1 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:09.727 09:31:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:09.727 1+0 records in 00:13:09.727 1+0 records out 00:13:09.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383539 s, 10.7 MB/s 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:09.727 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.986 
09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.986 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.245 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.246 [2024-11-15 09:31:58.670856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:10.246 [2024-11-15 09:31:58.670922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.246 [2024-11-15 09:31:58.670944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:10.246 [2024-11-15 09:31:58.670954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.246 [2024-11-15 09:31:58.673106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.246 [2024-11-15 09:31:58.673146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:10.246 [2024-11-15 09:31:58.673236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:10.246 [2024-11-15 
09:31:58.673290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.246 [2024-11-15 09:31:58.673430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.246 spare 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.246 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.506 [2024-11-15 09:31:58.773326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:10.506 [2024-11-15 09:31:58.773362] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:10.506 [2024-11-15 09:31:58.773661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:10.506 [2024-11-15 09:31:58.773830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:10.506 [2024-11-15 09:31:58.773839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:10.506 [2024-11-15 09:31:58.774025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.506 "name": "raid_bdev1", 00:13:10.506 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:10.506 "strip_size_kb": 0, 00:13:10.506 "state": "online", 00:13:10.506 "raid_level": "raid1", 00:13:10.506 "superblock": true, 00:13:10.506 "num_base_bdevs": 2, 00:13:10.506 "num_base_bdevs_discovered": 2, 00:13:10.506 "num_base_bdevs_operational": 2, 00:13:10.506 "base_bdevs_list": [ 00:13:10.506 { 00:13:10.506 "name": "spare", 00:13:10.506 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:10.506 "is_configured": true, 00:13:10.506 "data_offset": 2048, 00:13:10.506 "data_size": 63488 00:13:10.506 }, 00:13:10.506 { 00:13:10.506 "name": "BaseBdev2", 00:13:10.506 "uuid": 
"318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:10.506 "is_configured": true, 00:13:10.506 "data_offset": 2048, 00:13:10.506 "data_size": 63488 00:13:10.506 } 00:13:10.506 ] 00:13:10.506 }' 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.506 09:31:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.765 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.765 "name": "raid_bdev1", 00:13:10.765 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:10.765 "strip_size_kb": 0, 00:13:10.765 "state": "online", 00:13:10.765 "raid_level": "raid1", 00:13:10.765 "superblock": true, 00:13:10.765 "num_base_bdevs": 2, 00:13:10.765 "num_base_bdevs_discovered": 2, 00:13:10.765 "num_base_bdevs_operational": 2, 00:13:10.765 "base_bdevs_list": [ 00:13:10.765 { 
00:13:10.765 "name": "spare", 00:13:10.765 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:10.765 "is_configured": true, 00:13:10.766 "data_offset": 2048, 00:13:10.766 "data_size": 63488 00:13:10.766 }, 00:13:10.766 { 00:13:10.766 "name": "BaseBdev2", 00:13:10.766 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:10.766 "is_configured": true, 00:13:10.766 "data_offset": 2048, 00:13:10.766 "data_size": 63488 00:13:10.766 } 00:13:10.766 ] 00:13:10.766 }' 00:13:10.766 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.766 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.766 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.026 [2024-11-15 09:31:59.325823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.026 "name": "raid_bdev1", 00:13:11.026 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:11.026 "strip_size_kb": 0, 00:13:11.026 
"state": "online", 00:13:11.026 "raid_level": "raid1", 00:13:11.026 "superblock": true, 00:13:11.026 "num_base_bdevs": 2, 00:13:11.026 "num_base_bdevs_discovered": 1, 00:13:11.026 "num_base_bdevs_operational": 1, 00:13:11.026 "base_bdevs_list": [ 00:13:11.026 { 00:13:11.026 "name": null, 00:13:11.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.026 "is_configured": false, 00:13:11.026 "data_offset": 0, 00:13:11.026 "data_size": 63488 00:13:11.026 }, 00:13:11.026 { 00:13:11.026 "name": "BaseBdev2", 00:13:11.026 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:11.026 "is_configured": true, 00:13:11.026 "data_offset": 2048, 00:13:11.026 "data_size": 63488 00:13:11.026 } 00:13:11.026 ] 00:13:11.026 }' 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.026 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.595 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.595 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.595 [2024-11-15 09:31:59.765104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.595 [2024-11-15 09:31:59.765300] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:11.595 [2024-11-15 09:31:59.765317] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:11.595 [2024-11-15 09:31:59.765354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.595 [2024-11-15 09:31:59.781330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:11.595 09:31:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.595 09:31:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:11.595 [2024-11-15 09:31:59.783204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.531 "name": "raid_bdev1", 00:13:12.531 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:12.531 "strip_size_kb": 0, 00:13:12.531 "state": "online", 00:13:12.531 "raid_level": "raid1", 
00:13:12.531 "superblock": true, 00:13:12.531 "num_base_bdevs": 2, 00:13:12.531 "num_base_bdevs_discovered": 2, 00:13:12.531 "num_base_bdevs_operational": 2, 00:13:12.531 "process": { 00:13:12.531 "type": "rebuild", 00:13:12.531 "target": "spare", 00:13:12.531 "progress": { 00:13:12.531 "blocks": 20480, 00:13:12.531 "percent": 32 00:13:12.531 } 00:13:12.531 }, 00:13:12.531 "base_bdevs_list": [ 00:13:12.531 { 00:13:12.531 "name": "spare", 00:13:12.531 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:12.531 "is_configured": true, 00:13:12.531 "data_offset": 2048, 00:13:12.531 "data_size": 63488 00:13:12.531 }, 00:13:12.531 { 00:13:12.531 "name": "BaseBdev2", 00:13:12.531 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:12.531 "is_configured": true, 00:13:12.531 "data_offset": 2048, 00:13:12.531 "data_size": 63488 00:13:12.531 } 00:13:12.531 ] 00:13:12.531 }' 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.531 09:32:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.531 [2024-11-15 09:32:00.924284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.531 [2024-11-15 09:32:00.988670] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:12.531 [2024-11-15 09:32:00.988752] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:12.531 [2024-11-15 09:32:00.988769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:12.531 [2024-11-15 09:32:00.988778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.790 "name": "raid_bdev1", 00:13:12.790 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:12.790 "strip_size_kb": 0, 00:13:12.790 "state": "online", 00:13:12.790 "raid_level": "raid1", 00:13:12.790 "superblock": true, 00:13:12.790 "num_base_bdevs": 2, 00:13:12.790 "num_base_bdevs_discovered": 1, 00:13:12.790 "num_base_bdevs_operational": 1, 00:13:12.790 "base_bdevs_list": [ 00:13:12.790 { 00:13:12.790 "name": null, 00:13:12.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.790 "is_configured": false, 00:13:12.790 "data_offset": 0, 00:13:12.790 "data_size": 63488 00:13:12.790 }, 00:13:12.790 { 00:13:12.790 "name": "BaseBdev2", 00:13:12.790 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:12.790 "is_configured": true, 00:13:12.790 "data_offset": 2048, 00:13:12.790 "data_size": 63488 00:13:12.790 } 00:13:12.790 ] 00:13:12.790 }' 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.790 09:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.049 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:13.049 09:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.049 09:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.308 [2024-11-15 09:32:01.520405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:13.308 [2024-11-15 09:32:01.520485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.308 [2024-11-15 09:32:01.520511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:13.308 [2024-11-15 09:32:01.520523] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.308 [2024-11-15 09:32:01.521061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.308 [2024-11-15 09:32:01.521083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:13.308 [2024-11-15 09:32:01.521190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:13.308 [2024-11-15 09:32:01.521207] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:13.308 [2024-11-15 09:32:01.521218] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:13.308 [2024-11-15 09:32:01.521241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.308 [2024-11-15 09:32:01.537733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:13.308 spare 00:13:13.308 09:32:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.308 09:32:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:13.308 [2024-11-15 09:32:01.539849] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.247 "name": "raid_bdev1", 00:13:14.247 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:14.247 "strip_size_kb": 0, 00:13:14.247 "state": "online", 00:13:14.247 "raid_level": "raid1", 00:13:14.247 "superblock": true, 00:13:14.247 "num_base_bdevs": 2, 00:13:14.247 "num_base_bdevs_discovered": 2, 00:13:14.247 "num_base_bdevs_operational": 2, 00:13:14.247 "process": { 00:13:14.247 "type": "rebuild", 00:13:14.247 "target": "spare", 00:13:14.247 "progress": { 00:13:14.247 "blocks": 20480, 00:13:14.247 "percent": 32 00:13:14.247 } 00:13:14.247 }, 00:13:14.247 "base_bdevs_list": [ 00:13:14.247 { 00:13:14.247 "name": "spare", 00:13:14.247 "uuid": "80f168f1-9896-539f-b49d-9a637b7157cc", 00:13:14.247 "is_configured": true, 00:13:14.247 "data_offset": 2048, 00:13:14.247 "data_size": 63488 00:13:14.247 }, 00:13:14.247 { 00:13:14.247 "name": "BaseBdev2", 00:13:14.247 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:14.247 "is_configured": true, 00:13:14.247 "data_offset": 2048, 00:13:14.247 "data_size": 63488 00:13:14.247 } 00:13:14.247 ] 00:13:14.247 }' 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.247 
09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.247 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.247 [2024-11-15 09:32:02.700131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.506 [2024-11-15 09:32:02.745609] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:14.506 [2024-11-15 09:32:02.745755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.506 [2024-11-15 09:32:02.745793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:14.506 [2024-11-15 09:32:02.745801] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.506 "name": "raid_bdev1", 00:13:14.506 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:14.506 "strip_size_kb": 0, 00:13:14.506 "state": "online", 00:13:14.506 "raid_level": "raid1", 00:13:14.506 "superblock": true, 00:13:14.506 "num_base_bdevs": 2, 00:13:14.506 "num_base_bdevs_discovered": 1, 00:13:14.506 "num_base_bdevs_operational": 1, 00:13:14.506 "base_bdevs_list": [ 00:13:14.506 { 00:13:14.506 "name": null, 00:13:14.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.506 "is_configured": false, 00:13:14.506 "data_offset": 0, 00:13:14.506 "data_size": 63488 00:13:14.506 }, 00:13:14.506 { 00:13:14.506 "name": "BaseBdev2", 00:13:14.506 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:14.506 "is_configured": true, 00:13:14.506 "data_offset": 2048, 00:13:14.506 "data_size": 63488 00:13:14.506 } 00:13:14.506 ] 00:13:14.506 }' 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.506 09:32:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.766 09:32:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.766 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.025 "name": "raid_bdev1", 00:13:15.025 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:15.025 "strip_size_kb": 0, 00:13:15.025 "state": "online", 00:13:15.025 "raid_level": "raid1", 00:13:15.025 "superblock": true, 00:13:15.025 "num_base_bdevs": 2, 00:13:15.025 "num_base_bdevs_discovered": 1, 00:13:15.025 "num_base_bdevs_operational": 1, 00:13:15.025 "base_bdevs_list": [ 00:13:15.025 { 00:13:15.025 "name": null, 00:13:15.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.025 "is_configured": false, 00:13:15.025 "data_offset": 0, 00:13:15.025 "data_size": 63488 00:13:15.025 }, 00:13:15.025 { 00:13:15.025 "name": "BaseBdev2", 00:13:15.025 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:15.025 "is_configured": true, 00:13:15.025 "data_offset": 2048, 00:13:15.025 "data_size": 
63488 00:13:15.025 } 00:13:15.025 ] 00:13:15.025 }' 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.025 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.025 [2024-11-15 09:32:03.366439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.025 [2024-11-15 09:32:03.366505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.025 [2024-11-15 09:32:03.366534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:15.025 [2024-11-15 09:32:03.366551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.025 [2024-11-15 09:32:03.367042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.025 [2024-11-15 09:32:03.367059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:15.025 [2024-11-15 09:32:03.367172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:15.025 [2024-11-15 09:32:03.367194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:15.025 [2024-11-15 09:32:03.367206] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:15.026 [2024-11-15 09:32:03.367217] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:15.026 BaseBdev1 00:13:15.026 09:32:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.026 09:32:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.965 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.225 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.225 "name": "raid_bdev1", 00:13:16.225 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:16.225 "strip_size_kb": 0, 00:13:16.225 "state": "online", 00:13:16.225 "raid_level": "raid1", 00:13:16.225 "superblock": true, 00:13:16.225 "num_base_bdevs": 2, 00:13:16.225 "num_base_bdevs_discovered": 1, 00:13:16.225 "num_base_bdevs_operational": 1, 00:13:16.225 "base_bdevs_list": [ 00:13:16.225 { 00:13:16.225 "name": null, 00:13:16.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.225 "is_configured": false, 00:13:16.225 "data_offset": 0, 00:13:16.225 "data_size": 63488 00:13:16.225 }, 00:13:16.225 { 00:13:16.225 "name": "BaseBdev2", 00:13:16.225 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:16.225 "is_configured": true, 00:13:16.225 "data_offset": 2048, 00:13:16.225 "data_size": 63488 00:13:16.225 } 00:13:16.225 ] 00:13:16.225 }' 00:13:16.225 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.225 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.484 "name": "raid_bdev1", 00:13:16.484 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:16.484 "strip_size_kb": 0, 00:13:16.484 "state": "online", 00:13:16.484 "raid_level": "raid1", 00:13:16.484 "superblock": true, 00:13:16.484 "num_base_bdevs": 2, 00:13:16.484 "num_base_bdevs_discovered": 1, 00:13:16.484 "num_base_bdevs_operational": 1, 00:13:16.484 "base_bdevs_list": [ 00:13:16.484 { 00:13:16.484 "name": null, 00:13:16.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.484 "is_configured": false, 00:13:16.484 "data_offset": 0, 00:13:16.484 "data_size": 63488 00:13:16.484 }, 00:13:16.484 { 00:13:16.484 "name": "BaseBdev2", 00:13:16.484 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:16.484 "is_configured": true, 00:13:16.484 "data_offset": 2048, 00:13:16.484 "data_size": 63488 00:13:16.484 } 00:13:16.484 ] 00:13:16.484 }' 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.484 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.484 09:32:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.744 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.744 09:32:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.744 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:16.744 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.744 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:16.745 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.745 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:16.745 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.745 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:16.745 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.745 09:32:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.745 [2024-11-15 09:32:05.003926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.745 [2024-11-15 09:32:05.004111] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:16.745 [2024-11-15 09:32:05.004129] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:16.745 request: 00:13:16.745 { 00:13:16.745 "base_bdev": "BaseBdev1", 00:13:16.745 "raid_bdev": "raid_bdev1", 00:13:16.745 "method": 
"bdev_raid_add_base_bdev", 00:13:16.745 "req_id": 1 00:13:16.745 } 00:13:16.745 Got JSON-RPC error response 00:13:16.745 response: 00:13:16.745 { 00:13:16.745 "code": -22, 00:13:16.745 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:16.745 } 00:13:16.745 09:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:16.745 09:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:16.745 09:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.745 09:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.745 09:32:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.745 09:32:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.713 09:32:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.713 "name": "raid_bdev1", 00:13:17.713 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:17.713 "strip_size_kb": 0, 00:13:17.713 "state": "online", 00:13:17.713 "raid_level": "raid1", 00:13:17.713 "superblock": true, 00:13:17.713 "num_base_bdevs": 2, 00:13:17.713 "num_base_bdevs_discovered": 1, 00:13:17.713 "num_base_bdevs_operational": 1, 00:13:17.713 "base_bdevs_list": [ 00:13:17.713 { 00:13:17.713 "name": null, 00:13:17.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.713 "is_configured": false, 00:13:17.713 "data_offset": 0, 00:13:17.713 "data_size": 63488 00:13:17.713 }, 00:13:17.713 { 00:13:17.713 "name": "BaseBdev2", 00:13:17.713 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:17.713 "is_configured": true, 00:13:17.713 "data_offset": 2048, 00:13:17.713 "data_size": 63488 00:13:17.713 } 00:13:17.713 ] 00:13:17.713 }' 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.713 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.282 "name": "raid_bdev1", 00:13:18.282 "uuid": "5c2d691a-2de8-4474-aba3-8a6b097893e5", 00:13:18.282 "strip_size_kb": 0, 00:13:18.282 "state": "online", 00:13:18.282 "raid_level": "raid1", 00:13:18.282 "superblock": true, 00:13:18.282 "num_base_bdevs": 2, 00:13:18.282 "num_base_bdevs_discovered": 1, 00:13:18.282 "num_base_bdevs_operational": 1, 00:13:18.282 "base_bdevs_list": [ 00:13:18.282 { 00:13:18.282 "name": null, 00:13:18.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.282 "is_configured": false, 00:13:18.282 "data_offset": 0, 00:13:18.282 "data_size": 63488 00:13:18.282 }, 00:13:18.282 { 00:13:18.282 "name": "BaseBdev2", 00:13:18.282 "uuid": "318deb3f-1f99-551d-8f60-17b019fca94c", 00:13:18.282 "is_configured": true, 00:13:18.282 "data_offset": 2048, 00:13:18.282 "data_size": 63488 00:13:18.282 } 00:13:18.282 ] 00:13:18.282 }' 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76141 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 76141 ']' 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 76141 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76141 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76141' 00:13:18.282 killing process with pid 76141 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 76141 00:13:18.282 Received shutdown signal, test time was about 60.000000 seconds 00:13:18.282 00:13:18.282 Latency(us) 00:13:18.282 [2024-11-15T09:32:06.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.282 [2024-11-15T09:32:06.745Z] =================================================================================================================== 00:13:18.282 [2024-11-15T09:32:06.745Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:18.282 09:32:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 76141 00:13:18.282 [2024-11-15 
09:32:06.669840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.282 [2024-11-15 09:32:06.669995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.282 [2024-11-15 09:32:06.670048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.282 [2024-11-15 09:32:06.670061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:18.541 [2024-11-15 09:32:06.987064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:19.918 00:13:19.918 real 0m24.137s 00:13:19.918 user 0m28.718s 00:13:19.918 sys 0m4.131s 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.918 ************************************ 00:13:19.918 END TEST raid_rebuild_test_sb 00:13:19.918 ************************************ 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.918 09:32:08 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:19.918 09:32:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:19.918 09:32:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.918 09:32:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.918 ************************************ 00:13:19.918 START TEST raid_rebuild_test_io 00:13:19.918 ************************************ 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:19.918 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:19.918 
09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76877 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76877 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76877 ']' 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.919 09:32:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.919 [2024-11-15 09:32:08.364155] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:13:19.919 [2024-11-15 09:32:08.364373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76877 ] 00:13:19.919 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:19.919 Zero copy mechanism will not be used. 
00:13:20.178 [2024-11-15 09:32:08.541862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.438 [2024-11-15 09:32:08.673809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.438 [2024-11-15 09:32:08.896728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.438 [2024-11-15 09:32:08.896842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.008 BaseBdev1_malloc 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.008 [2024-11-15 09:32:09.296375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:21.008 [2024-11-15 09:32:09.296449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.008 [2024-11-15 09:32:09.296475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:21.008 [2024-11-15 
09:32:09.296488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.008 [2024-11-15 09:32:09.298768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.008 [2024-11-15 09:32:09.298807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.008 BaseBdev1 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.008 BaseBdev2_malloc 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.008 [2024-11-15 09:32:09.351993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:21.008 [2024-11-15 09:32:09.352055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.008 [2024-11-15 09:32:09.352074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:21.008 [2024-11-15 09:32:09.352085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.008 [2024-11-15 09:32:09.354098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:21.008 [2024-11-15 09:32:09.354133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:21.008 BaseBdev2 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.008 spare_malloc 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.008 spare_delay 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.008 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.008 [2024-11-15 09:32:09.432305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:21.008 [2024-11-15 09:32:09.432422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.008 [2024-11-15 09:32:09.432452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:21.008 [2024-11-15 09:32:09.432464] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.008 [2024-11-15 09:32:09.434872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.009 [2024-11-15 09:32:09.434920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:21.009 spare 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.009 [2024-11-15 09:32:09.444334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.009 [2024-11-15 09:32:09.446124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.009 [2024-11-15 09:32:09.446209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:21.009 [2024-11-15 09:32:09.446223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:21.009 [2024-11-15 09:32:09.446473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:21.009 [2024-11-15 09:32:09.446631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:21.009 [2024-11-15 09:32:09.446641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:21.009 [2024-11-15 09:32:09.446780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.009 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.268 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.268 "name": "raid_bdev1", 00:13:21.268 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:21.268 "strip_size_kb": 0, 00:13:21.268 "state": "online", 00:13:21.268 "raid_level": "raid1", 00:13:21.268 "superblock": false, 00:13:21.268 "num_base_bdevs": 2, 00:13:21.268 
"num_base_bdevs_discovered": 2, 00:13:21.268 "num_base_bdevs_operational": 2, 00:13:21.268 "base_bdevs_list": [ 00:13:21.268 { 00:13:21.268 "name": "BaseBdev1", 00:13:21.268 "uuid": "6044dc62-d6e3-5244-8f24-20705f35fd45", 00:13:21.268 "is_configured": true, 00:13:21.268 "data_offset": 0, 00:13:21.268 "data_size": 65536 00:13:21.268 }, 00:13:21.268 { 00:13:21.268 "name": "BaseBdev2", 00:13:21.268 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:21.268 "is_configured": true, 00:13:21.268 "data_offset": 0, 00:13:21.268 "data_size": 65536 00:13:21.268 } 00:13:21.268 ] 00:13:21.268 }' 00:13:21.268 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.268 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:21.528 [2024-11-15 09:32:09.908055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.528 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.528 [2024-11-15 09:32:09.991540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.788 09:32:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.788 "name": "raid_bdev1", 00:13:21.788 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:21.788 "strip_size_kb": 0, 00:13:21.788 "state": "online", 00:13:21.788 "raid_level": "raid1", 00:13:21.788 "superblock": false, 00:13:21.788 "num_base_bdevs": 2, 00:13:21.788 "num_base_bdevs_discovered": 1, 00:13:21.788 "num_base_bdevs_operational": 1, 00:13:21.788 "base_bdevs_list": [ 00:13:21.788 { 00:13:21.788 "name": null, 00:13:21.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.788 "is_configured": false, 00:13:21.788 "data_offset": 0, 00:13:21.788 "data_size": 65536 00:13:21.788 }, 00:13:21.788 { 00:13:21.788 "name": "BaseBdev2", 00:13:21.788 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:21.788 "is_configured": true, 00:13:21.788 "data_offset": 0, 00:13:21.788 "data_size": 65536 00:13:21.788 } 00:13:21.788 ] 00:13:21.788 }' 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.788 09:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.788 [2024-11-15 09:32:10.095503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:21.788 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:13:21.788 Zero copy mechanism will not be used. 00:13:21.788 Running I/O for 60 seconds... 00:13:22.048 09:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:22.048 09:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.048 09:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.048 [2024-11-15 09:32:10.463048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.048 09:32:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.048 09:32:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:22.048 [2024-11-15 09:32:10.508973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:22.048 [2024-11-15 09:32:10.510918] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.307 [2024-11-15 09:32:10.630096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:22.307 [2024-11-15 09:32:10.630644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:22.566 [2024-11-15 09:32:10.859981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:22.566 [2024-11-15 09:32:10.860336] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:22.826 183.00 IOPS, 549.00 MiB/s [2024-11-15T09:32:11.289Z] [2024-11-15 09:32:11.125694] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:22.826 [2024-11-15 09:32:11.126269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:23.086 [2024-11-15 09:32:11.359023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:23.086 [2024-11-15 09:32:11.359308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.086 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.345 "name": "raid_bdev1", 00:13:23.345 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:23.345 "strip_size_kb": 0, 00:13:23.345 "state": "online", 00:13:23.345 "raid_level": "raid1", 00:13:23.345 "superblock": false, 00:13:23.345 "num_base_bdevs": 2, 00:13:23.345 "num_base_bdevs_discovered": 2, 00:13:23.345 "num_base_bdevs_operational": 2, 00:13:23.345 "process": { 00:13:23.345 
"type": "rebuild", 00:13:23.345 "target": "spare", 00:13:23.345 "progress": { 00:13:23.345 "blocks": 10240, 00:13:23.345 "percent": 15 00:13:23.345 } 00:13:23.345 }, 00:13:23.345 "base_bdevs_list": [ 00:13:23.345 { 00:13:23.345 "name": "spare", 00:13:23.345 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:23.345 "is_configured": true, 00:13:23.345 "data_offset": 0, 00:13:23.345 "data_size": 65536 00:13:23.345 }, 00:13:23.345 { 00:13:23.345 "name": "BaseBdev2", 00:13:23.345 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:23.345 "is_configured": true, 00:13:23.345 "data_offset": 0, 00:13:23.345 "data_size": 65536 00:13:23.345 } 00:13:23.345 ] 00:13:23.345 }' 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.345 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.345 [2024-11-15 09:32:11.656552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.345 [2024-11-15 09:32:11.807795] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.604 [2024-11-15 09:32:11.816515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.604 [2024-11-15 09:32:11.816561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.604 [2024-11-15 09:32:11.816577] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.604 [2024-11-15 09:32:11.867844] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.604 "name": "raid_bdev1", 00:13:23.604 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:23.604 "strip_size_kb": 0, 00:13:23.604 "state": "online", 00:13:23.604 "raid_level": "raid1", 00:13:23.604 "superblock": false, 00:13:23.604 "num_base_bdevs": 2, 00:13:23.604 "num_base_bdevs_discovered": 1, 00:13:23.604 "num_base_bdevs_operational": 1, 00:13:23.604 "base_bdevs_list": [ 00:13:23.604 { 00:13:23.604 "name": null, 00:13:23.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.604 "is_configured": false, 00:13:23.604 "data_offset": 0, 00:13:23.604 "data_size": 65536 00:13:23.604 }, 00:13:23.604 { 00:13:23.604 "name": "BaseBdev2", 00:13:23.604 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:23.604 "is_configured": true, 00:13:23.604 "data_offset": 0, 00:13:23.604 "data_size": 65536 00:13:23.604 } 00:13:23.604 ] 00:13:23.604 }' 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.604 09:32:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.134 163.50 IOPS, 490.50 MiB/s [2024-11-15T09:32:12.597Z] 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.134 "name": "raid_bdev1", 00:13:24.134 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:24.134 "strip_size_kb": 0, 00:13:24.134 "state": "online", 00:13:24.134 "raid_level": "raid1", 00:13:24.134 "superblock": false, 00:13:24.134 "num_base_bdevs": 2, 00:13:24.134 "num_base_bdevs_discovered": 1, 00:13:24.134 "num_base_bdevs_operational": 1, 00:13:24.134 "base_bdevs_list": [ 00:13:24.134 { 00:13:24.134 "name": null, 00:13:24.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.134 "is_configured": false, 00:13:24.134 "data_offset": 0, 00:13:24.134 "data_size": 65536 00:13:24.134 }, 00:13:24.134 { 00:13:24.134 "name": "BaseBdev2", 00:13:24.134 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:24.134 "is_configured": true, 00:13:24.134 "data_offset": 0, 00:13:24.134 "data_size": 65536 00:13:24.134 } 00:13:24.134 ] 00:13:24.134 }' 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.134 09:32:12 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.134 [2024-11-15 09:32:12.531197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.134 09:32:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:24.134 [2024-11-15 09:32:12.586569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:24.134 [2024-11-15 09:32:12.588496] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:24.393 [2024-11-15 09:32:12.695595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.393 [2024-11-15 09:32:12.696198] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:24.652 [2024-11-15 09:32:12.905412] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.652 [2024-11-15 09:32:12.905745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.912 171.00 IOPS, 513.00 MiB/s [2024-11-15T09:32:13.375Z] [2024-11-15 09:32:13.139027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:24.912 [2024-11-15 09:32:13.261222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:25.172 [2024-11-15 09:32:13.481437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.172 09:32:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.172 "name": "raid_bdev1", 00:13:25.172 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:25.172 "strip_size_kb": 0, 00:13:25.172 "state": "online", 00:13:25.172 "raid_level": "raid1", 00:13:25.172 "superblock": false, 00:13:25.172 "num_base_bdevs": 2, 00:13:25.172 "num_base_bdevs_discovered": 2, 00:13:25.172 "num_base_bdevs_operational": 2, 00:13:25.172 "process": { 00:13:25.172 "type": "rebuild", 00:13:25.172 "target": "spare", 00:13:25.172 "progress": { 00:13:25.172 "blocks": 14336, 00:13:25.172 "percent": 21 00:13:25.172 } 00:13:25.172 }, 00:13:25.172 "base_bdevs_list": [ 00:13:25.172 { 00:13:25.172 "name": "spare", 00:13:25.172 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:25.172 "is_configured": true, 00:13:25.172 "data_offset": 0, 00:13:25.172 "data_size": 65536 00:13:25.172 }, 00:13:25.172 { 00:13:25.172 "name": "BaseBdev2", 00:13:25.172 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:25.172 
"is_configured": true, 00:13:25.172 "data_offset": 0, 00:13:25.172 "data_size": 65536 00:13:25.172 } 00:13:25.172 ] 00:13:25.172 }' 00:13:25.172 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.431 [2024-11-15 09:32:13.700215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=427 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.431 09:32:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.431 09:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.432 "name": "raid_bdev1", 00:13:25.432 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:25.432 "strip_size_kb": 0, 00:13:25.432 "state": "online", 00:13:25.432 "raid_level": "raid1", 00:13:25.432 "superblock": false, 00:13:25.432 "num_base_bdevs": 2, 00:13:25.432 "num_base_bdevs_discovered": 2, 00:13:25.432 "num_base_bdevs_operational": 2, 00:13:25.432 "process": { 00:13:25.432 "type": "rebuild", 00:13:25.432 "target": "spare", 00:13:25.432 "progress": { 00:13:25.432 "blocks": 16384, 00:13:25.432 "percent": 25 00:13:25.432 } 00:13:25.432 }, 00:13:25.432 "base_bdevs_list": [ 00:13:25.432 { 00:13:25.432 "name": "spare", 00:13:25.432 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:25.432 "is_configured": true, 00:13:25.432 "data_offset": 0, 00:13:25.432 "data_size": 65536 00:13:25.432 }, 00:13:25.432 { 00:13:25.432 "name": "BaseBdev2", 00:13:25.432 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:25.432 "is_configured": true, 00:13:25.432 "data_offset": 0, 00:13:25.432 "data_size": 65536 00:13:25.432 } 00:13:25.432 ] 00:13:25.432 }' 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.432 09:32:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.692 [2024-11-15 09:32:13.924149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:25.692 [2024-11-15 09:32:13.924840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:25.692 [2024-11-15 09:32:14.041235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:26.261 155.75 IOPS, 467.25 MiB/s [2024-11-15T09:32:14.724Z] [2024-11-15 09:32:14.607498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:26.261 [2024-11-15 09:32:14.608138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.520 [2024-11-15 09:32:14.824205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.520 09:32:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.520 "name": "raid_bdev1", 00:13:26.520 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:26.520 "strip_size_kb": 0, 00:13:26.520 "state": "online", 00:13:26.520 "raid_level": "raid1", 00:13:26.520 "superblock": false, 00:13:26.520 "num_base_bdevs": 2, 00:13:26.520 "num_base_bdevs_discovered": 2, 00:13:26.520 "num_base_bdevs_operational": 2, 00:13:26.520 "process": { 00:13:26.520 "type": "rebuild", 00:13:26.520 "target": "spare", 00:13:26.520 "progress": { 00:13:26.520 "blocks": 34816, 00:13:26.520 "percent": 53 00:13:26.520 } 00:13:26.520 }, 00:13:26.520 "base_bdevs_list": [ 00:13:26.520 { 00:13:26.520 "name": "spare", 00:13:26.520 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:26.520 "is_configured": true, 00:13:26.520 "data_offset": 0, 00:13:26.520 "data_size": 65536 00:13:26.520 }, 00:13:26.520 { 00:13:26.520 "name": "BaseBdev2", 00:13:26.520 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:26.520 "is_configured": true, 00:13:26.520 "data_offset": 0, 00:13:26.520 "data_size": 65536 00:13:26.520 } 00:13:26.520 ] 00:13:26.520 }' 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.520 09:32:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.348 134.20 IOPS, 402.60 MiB/s [2024-11-15T09:32:15.811Z] [2024-11-15 09:32:15.640120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.617 09:32:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.617 09:32:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.617 09:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.617 "name": "raid_bdev1", 00:13:27.617 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:27.617 "strip_size_kb": 0, 00:13:27.617 "state": "online", 00:13:27.617 "raid_level": "raid1", 
00:13:27.617 "superblock": false, 00:13:27.617 "num_base_bdevs": 2, 00:13:27.617 "num_base_bdevs_discovered": 2, 00:13:27.617 "num_base_bdevs_operational": 2, 00:13:27.617 "process": { 00:13:27.617 "type": "rebuild", 00:13:27.617 "target": "spare", 00:13:27.617 "progress": { 00:13:27.617 "blocks": 57344, 00:13:27.617 "percent": 87 00:13:27.617 } 00:13:27.617 }, 00:13:27.617 "base_bdevs_list": [ 00:13:27.617 { 00:13:27.617 "name": "spare", 00:13:27.617 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:27.617 "is_configured": true, 00:13:27.617 "data_offset": 0, 00:13:27.617 "data_size": 65536 00:13:27.617 }, 00:13:27.617 { 00:13:27.617 "name": "BaseBdev2", 00:13:27.617 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:27.617 "is_configured": true, 00:13:27.617 "data_offset": 0, 00:13:27.617 "data_size": 65536 00:13:27.617 } 00:13:27.617 ] 00:13:27.617 }' 00:13:27.617 09:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.617 09:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.617 09:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.877 [2024-11-15 09:32:16.086444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:27.877 09:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.877 09:32:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:28.135 116.67 IOPS, 350.00 MiB/s [2024-11-15T09:32:16.598Z] [2024-11-15 09:32:16.531036] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:28.392 [2024-11-15 09:32:16.630821] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:28.392 [2024-11-15 09:32:16.633297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:28.650 105.00 IOPS, 315.00 MiB/s [2024-11-15T09:32:17.113Z] 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.650 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.910 "name": "raid_bdev1", 00:13:28.910 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:28.910 "strip_size_kb": 0, 00:13:28.910 "state": "online", 00:13:28.910 "raid_level": "raid1", 00:13:28.910 "superblock": false, 00:13:28.910 "num_base_bdevs": 2, 00:13:28.910 "num_base_bdevs_discovered": 2, 00:13:28.910 "num_base_bdevs_operational": 2, 00:13:28.910 "base_bdevs_list": [ 00:13:28.910 { 00:13:28.910 "name": "spare", 00:13:28.910 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:28.910 "is_configured": true, 00:13:28.910 "data_offset": 0, 00:13:28.910 "data_size": 65536 
00:13:28.910 }, 00:13:28.910 { 00:13:28.910 "name": "BaseBdev2", 00:13:28.910 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:28.910 "is_configured": true, 00:13:28.910 "data_offset": 0, 00:13:28.910 "data_size": 65536 00:13:28.910 } 00:13:28.910 ] 00:13:28.910 }' 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:13:28.910 "name": "raid_bdev1", 00:13:28.910 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:28.910 "strip_size_kb": 0, 00:13:28.910 "state": "online", 00:13:28.910 "raid_level": "raid1", 00:13:28.910 "superblock": false, 00:13:28.910 "num_base_bdevs": 2, 00:13:28.910 "num_base_bdevs_discovered": 2, 00:13:28.910 "num_base_bdevs_operational": 2, 00:13:28.910 "base_bdevs_list": [ 00:13:28.910 { 00:13:28.910 "name": "spare", 00:13:28.910 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:28.910 "is_configured": true, 00:13:28.910 "data_offset": 0, 00:13:28.910 "data_size": 65536 00:13:28.910 }, 00:13:28.910 { 00:13:28.910 "name": "BaseBdev2", 00:13:28.910 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:28.910 "is_configured": true, 00:13:28.910 "data_offset": 0, 00:13:28.910 "data_size": 65536 00:13:28.910 } 00:13:28.910 ] 00:13:28.910 }' 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.910 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.169 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.169 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.169 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.169 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.169 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.170 "name": "raid_bdev1", 00:13:29.170 "uuid": "3a9d12cb-8dbc-4290-980a-b61b2e063283", 00:13:29.170 "strip_size_kb": 0, 00:13:29.170 "state": "online", 00:13:29.170 "raid_level": "raid1", 00:13:29.170 "superblock": false, 00:13:29.170 "num_base_bdevs": 2, 00:13:29.170 "num_base_bdevs_discovered": 2, 00:13:29.170 "num_base_bdevs_operational": 2, 00:13:29.170 "base_bdevs_list": [ 00:13:29.170 { 00:13:29.170 "name": "spare", 00:13:29.170 "uuid": "3926a900-d0cb-574b-94a7-f729bf93ac89", 00:13:29.170 "is_configured": true, 00:13:29.170 "data_offset": 0, 00:13:29.170 "data_size": 65536 00:13:29.170 }, 00:13:29.170 { 00:13:29.170 "name": "BaseBdev2", 00:13:29.170 "uuid": "642ba788-94f4-5236-93e2-4ddd139528bd", 00:13:29.170 "is_configured": true, 00:13:29.170 "data_offset": 0, 00:13:29.170 "data_size": 65536 00:13:29.170 } 00:13:29.170 ] 00:13:29.170 }' 00:13:29.170 09:32:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.170 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.429 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.429 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.429 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.429 [2024-11-15 09:32:17.845462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.429 [2024-11-15 09:32:17.845599] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.687 00:13:29.687 Latency(us) 00:13:29.687 [2024-11-15T09:32:18.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.687 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:29.687 raid_bdev1 : 7.84 97.80 293.39 0.00 0.00 14070.17 298.70 114015.47 00:13:29.687 [2024-11-15T09:32:18.150Z] =================================================================================================================== 00:13:29.687 [2024-11-15T09:32:18.150Z] Total : 97.80 293.39 0.00 0.00 14070.17 298.70 114015.47 00:13:29.687 [2024-11-15 09:32:17.951740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.687 [2024-11-15 09:32:17.951880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:29.687 [2024-11-15 09:32:17.952056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:29.687 [2024-11-15 09:32:17.952123] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:29.687 { 00:13:29.687 "results": [ 00:13:29.687 { 00:13:29.687 "job": "raid_bdev1", 00:13:29.687 "core_mask": "0x1", 
00:13:29.687 "workload": "randrw", 00:13:29.687 "percentage": 50, 00:13:29.687 "status": "finished", 00:13:29.687 "queue_depth": 2, 00:13:29.687 "io_size": 3145728, 00:13:29.687 "runtime": 7.842799, 00:13:29.687 "iops": 97.79671772794381, 00:13:29.687 "mibps": 293.39015318383144, 00:13:29.687 "io_failed": 0, 00:13:29.687 "io_timeout": 0, 00:13:29.687 "avg_latency_us": 14070.17381392939, 00:13:29.687 "min_latency_us": 298.70393013100437, 00:13:29.687 "max_latency_us": 114015.46899563319 00:13:29.687 } 00:13:29.687 ], 00:13:29.687 "core_count": 1 00:13:29.687 } 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.687 09:32:17 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:29.945 /dev/nbd0 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.945 1+0 records in 00:13:29.945 1+0 records out 00:13:29.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482955 s, 8.5 MB/s 
00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.945 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.945 09:32:18 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:30.204 /dev/nbd1 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.204 1+0 records in 00:13:30.204 1+0 records out 00:13:30.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433381 s, 9.5 MB/s 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.204 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:30.461 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:30.461 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.461 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:30.461 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.461 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.461 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.461 09:32:18 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.728 
09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.728 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76877 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' 
-z 76877 ']' 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76877 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76877 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76877' 00:13:30.996 killing process with pid 76877 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76877 00:13:30.996 Received shutdown signal, test time was about 9.203624 seconds 00:13:30.996 00:13:30.996 Latency(us) 00:13:30.996 [2024-11-15T09:32:19.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.996 [2024-11-15T09:32:19.459Z] =================================================================================================================== 00:13:30.996 [2024-11-15T09:32:19.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.996 09:32:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76877 00:13:30.996 [2024-11-15 09:32:19.283579] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.254 [2024-11-15 09:32:19.519884] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.631 09:32:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:32.631 00:13:32.632 real 0m12.402s 00:13:32.632 user 0m15.719s 00:13:32.632 sys 0m1.491s 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.632 ************************************ 00:13:32.632 END TEST raid_rebuild_test_io 00:13:32.632 ************************************ 00:13:32.632 09:32:20 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:32.632 09:32:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:32.632 09:32:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:32.632 09:32:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.632 ************************************ 00:13:32.632 START TEST raid_rebuild_test_sb_io 00:13:32.632 ************************************ 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77255 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77255 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@833 -- # '[' -z 77255 ']' 00:13:32.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:32.632 09:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.632 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:32.632 Zero copy mechanism will not be used. 00:13:32.632 [2024-11-15 09:32:20.865666] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:13:32.632 [2024-11-15 09:32:20.865826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77255 ] 00:13:32.632 [2024-11-15 09:32:21.043333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.892 [2024-11-15 09:32:21.154662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.151 [2024-11-15 09:32:21.356973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.151 [2024-11-15 09:32:21.357044] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 BaseBdev1_malloc 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 [2024-11-15 09:32:21.724572] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:33.411 [2024-11-15 09:32:21.724645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.411 [2024-11-15 09:32:21.724670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:33.411 [2024-11-15 09:32:21.724682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.411 [2024-11-15 09:32:21.726754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.411 [2024-11-15 09:32:21.726913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:33.411 BaseBdev1 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 BaseBdev2_malloc 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 [2024-11-15 09:32:21.779040] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:33.411 [2024-11-15 09:32:21.779104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:33.411 [2024-11-15 09:32:21.779124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:33.411 [2024-11-15 09:32:21.779136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.411 [2024-11-15 09:32:21.781188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.411 [2024-11-15 09:32:21.781301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:33.411 BaseBdev2 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 spare_malloc 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 spare_delay 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 
[2024-11-15 09:32:21.861046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.411 [2024-11-15 09:32:21.861114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.411 [2024-11-15 09:32:21.861134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:33.411 [2024-11-15 09:32:21.861144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.411 [2024-11-15 09:32:21.863260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.411 [2024-11-15 09:32:21.863300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.411 spare 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.411 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 [2024-11-15 09:32:21.873106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.411 [2024-11-15 09:32:21.874996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.671 [2024-11-15 09:32:21.875262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:33.671 [2024-11-15 09:32:21.875285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.671 [2024-11-15 09:32:21.875550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:33.671 [2024-11-15 09:32:21.875738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:33.671 [2024-11-15 
09:32:21.875748] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:33.671 [2024-11-15 09:32:21.875931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.671 "name": "raid_bdev1", 00:13:33.671 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:33.671 "strip_size_kb": 0, 00:13:33.671 "state": "online", 00:13:33.671 "raid_level": "raid1", 00:13:33.671 "superblock": true, 00:13:33.671 "num_base_bdevs": 2, 00:13:33.671 "num_base_bdevs_discovered": 2, 00:13:33.671 "num_base_bdevs_operational": 2, 00:13:33.671 "base_bdevs_list": [ 00:13:33.671 { 00:13:33.671 "name": "BaseBdev1", 00:13:33.671 "uuid": "d03d3062-b405-54bd-81f2-c27e321a6daf", 00:13:33.671 "is_configured": true, 00:13:33.671 "data_offset": 2048, 00:13:33.671 "data_size": 63488 00:13:33.671 }, 00:13:33.671 { 00:13:33.671 "name": "BaseBdev2", 00:13:33.671 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:33.671 "is_configured": true, 00:13:33.671 "data_offset": 2048, 00:13:33.671 "data_size": 63488 00:13:33.671 } 00:13:33.671 ] 00:13:33.671 }' 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.671 09:32:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.931 [2024-11-15 09:32:22.328592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.931 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:34.191 [2024-11-15 09:32:22.424139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.191 "name": "raid_bdev1", 00:13:34.191 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:34.191 "strip_size_kb": 0, 00:13:34.191 "state": "online", 00:13:34.191 "raid_level": "raid1", 00:13:34.191 "superblock": true, 00:13:34.191 "num_base_bdevs": 2, 00:13:34.191 "num_base_bdevs_discovered": 1, 00:13:34.191 "num_base_bdevs_operational": 1, 00:13:34.191 "base_bdevs_list": [ 00:13:34.191 { 00:13:34.191 "name": null, 00:13:34.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.191 "is_configured": false, 00:13:34.191 "data_offset": 0, 00:13:34.191 "data_size": 63488 00:13:34.191 }, 00:13:34.191 { 00:13:34.191 "name": "BaseBdev2", 00:13:34.191 "uuid": 
"dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:34.191 "is_configured": true, 00:13:34.191 "data_offset": 2048, 00:13:34.191 "data_size": 63488 00:13:34.191 } 00:13:34.191 ] 00:13:34.191 }' 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.191 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.191 [2024-11-15 09:32:22.527751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:34.191 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:34.191 Zero copy mechanism will not be used. 00:13:34.191 Running I/O for 60 seconds... 00:13:34.451 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.451 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.451 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.451 [2024-11-15 09:32:22.822742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.451 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.451 09:32:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:34.451 [2024-11-15 09:32:22.882357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:34.451 [2024-11-15 09:32:22.884389] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.713 [2024-11-15 09:32:22.999178] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:34.713 [2024-11-15 09:32:22.999795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:34.713 [2024-11-15 09:32:23.116930] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:34.713 [2024-11-15 09:32:23.117244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:35.303 [2024-11-15 09:32:23.476272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:35.563 212.00 IOPS, 636.00 MiB/s [2024-11-15T09:32:24.026Z] [2024-11-15 09:32:23.801369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.563 [2024-11-15 09:32:23.904340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:35.563 09:32:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.563 "name": "raid_bdev1", 00:13:35.563 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:35.563 "strip_size_kb": 0, 00:13:35.563 "state": "online", 00:13:35.563 "raid_level": "raid1", 00:13:35.563 "superblock": true, 00:13:35.563 "num_base_bdevs": 2, 00:13:35.563 "num_base_bdevs_discovered": 2, 00:13:35.563 "num_base_bdevs_operational": 2, 00:13:35.563 "process": { 00:13:35.563 "type": "rebuild", 00:13:35.563 "target": "spare", 00:13:35.563 "progress": { 00:13:35.563 "blocks": 14336, 00:13:35.563 "percent": 22 00:13:35.563 } 00:13:35.563 }, 00:13:35.563 "base_bdevs_list": [ 00:13:35.563 { 00:13:35.563 "name": "spare", 00:13:35.563 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:35.563 "is_configured": true, 00:13:35.563 "data_offset": 2048, 00:13:35.563 "data_size": 63488 00:13:35.563 }, 00:13:35.563 { 00:13:35.563 "name": "BaseBdev2", 00:13:35.563 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:35.563 "is_configured": true, 00:13:35.563 "data_offset": 2048, 00:13:35.563 "data_size": 63488 00:13:35.563 } 00:13:35.563 ] 00:13:35.563 }' 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.563 09:32:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.563 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.563 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:35.563 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.563 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.563 [2024-11-15 
09:32:24.008902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.828 [2024-11-15 09:32:24.133118] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.828 [2024-11-15 09:32:24.147450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.828 [2024-11-15 09:32:24.147501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.828 [2024-11-15 09:32:24.147514] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.828 [2024-11-15 09:32:24.189444] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.828 
09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.828 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.829 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.829 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.829 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.829 "name": "raid_bdev1", 00:13:35.829 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:35.829 "strip_size_kb": 0, 00:13:35.829 "state": "online", 00:13:35.829 "raid_level": "raid1", 00:13:35.829 "superblock": true, 00:13:35.829 "num_base_bdevs": 2, 00:13:35.829 "num_base_bdevs_discovered": 1, 00:13:35.829 "num_base_bdevs_operational": 1, 00:13:35.829 "base_bdevs_list": [ 00:13:35.829 { 00:13:35.829 "name": null, 00:13:35.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.829 "is_configured": false, 00:13:35.829 "data_offset": 0, 00:13:35.829 "data_size": 63488 00:13:35.829 }, 00:13:35.829 { 00:13:35.829 "name": "BaseBdev2", 00:13:35.829 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:35.829 "is_configured": true, 00:13:35.829 "data_offset": 2048, 00:13:35.829 "data_size": 63488 00:13:35.829 } 00:13:35.829 ] 00:13:35.829 }' 00:13:35.829 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.829 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.351 178.50 IOPS, 535.50 MiB/s [2024-11-15T09:32:24.815Z] 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.352 "name": "raid_bdev1", 00:13:36.352 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:36.352 "strip_size_kb": 0, 00:13:36.352 "state": "online", 00:13:36.352 "raid_level": "raid1", 00:13:36.352 "superblock": true, 00:13:36.352 "num_base_bdevs": 2, 00:13:36.352 "num_base_bdevs_discovered": 1, 00:13:36.352 "num_base_bdevs_operational": 1, 00:13:36.352 "base_bdevs_list": [ 00:13:36.352 { 00:13:36.352 "name": null, 00:13:36.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.352 "is_configured": false, 00:13:36.352 "data_offset": 0, 00:13:36.352 "data_size": 63488 00:13:36.352 }, 00:13:36.352 { 00:13:36.352 "name": "BaseBdev2", 00:13:36.352 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:36.352 "is_configured": true, 00:13:36.352 "data_offset": 2048, 00:13:36.352 "data_size": 63488 00:13:36.352 } 00:13:36.352 ] 00:13:36.352 }' 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.352 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.352 [2024-11-15 09:32:24.794823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.612 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.612 09:32:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:36.612 [2024-11-15 09:32:24.840724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:36.612 [2024-11-15 09:32:24.842692] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.612 [2024-11-15 09:32:24.956258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.612 [2024-11-15 09:32:24.956650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:36.872 [2024-11-15 09:32:25.169654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.872 [2024-11-15 09:32:25.170039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:37.391 167.67 IOPS, 503.00 MiB/s [2024-11-15T09:32:25.854Z] [2024-11-15 09:32:25.609076] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:37.391 [2024-11-15 09:32:25.609391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.391 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.651 "name": "raid_bdev1", 00:13:37.651 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:37.651 "strip_size_kb": 0, 00:13:37.651 "state": "online", 00:13:37.651 "raid_level": "raid1", 00:13:37.651 "superblock": true, 00:13:37.651 "num_base_bdevs": 2, 00:13:37.651 "num_base_bdevs_discovered": 2, 00:13:37.651 "num_base_bdevs_operational": 2, 00:13:37.651 "process": { 00:13:37.651 "type": "rebuild", 00:13:37.651 "target": "spare", 00:13:37.651 
"progress": { 00:13:37.651 "blocks": 12288, 00:13:37.651 "percent": 19 00:13:37.651 } 00:13:37.651 }, 00:13:37.651 "base_bdevs_list": [ 00:13:37.651 { 00:13:37.651 "name": "spare", 00:13:37.651 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:37.651 "is_configured": true, 00:13:37.651 "data_offset": 2048, 00:13:37.651 "data_size": 63488 00:13:37.651 }, 00:13:37.651 { 00:13:37.651 "name": "BaseBdev2", 00:13:37.651 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:37.651 "is_configured": true, 00:13:37.651 "data_offset": 2048, 00:13:37.651 "data_size": 63488 00:13:37.651 } 00:13:37.651 ] 00:13:37.651 }' 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.651 [2024-11-15 09:32:25.923185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:37.651 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=439 
00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.651 09:32:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.651 "name": "raid_bdev1", 00:13:37.651 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:37.651 "strip_size_kb": 0, 00:13:37.651 "state": "online", 00:13:37.651 "raid_level": "raid1", 00:13:37.651 "superblock": true, 00:13:37.651 "num_base_bdevs": 2, 00:13:37.651 "num_base_bdevs_discovered": 2, 00:13:37.651 "num_base_bdevs_operational": 2, 00:13:37.651 "process": { 00:13:37.651 "type": "rebuild", 00:13:37.651 "target": "spare", 00:13:37.651 "progress": { 00:13:37.651 "blocks": 14336, 00:13:37.651 "percent": 22 00:13:37.651 } 00:13:37.651 }, 00:13:37.651 "base_bdevs_list": [ 00:13:37.651 { 00:13:37.651 "name": "spare", 
00:13:37.651 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:37.651 "is_configured": true, 00:13:37.651 "data_offset": 2048, 00:13:37.651 "data_size": 63488 00:13:37.651 }, 00:13:37.651 { 00:13:37.651 "name": "BaseBdev2", 00:13:37.651 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:37.651 "is_configured": true, 00:13:37.651 "data_offset": 2048, 00:13:37.651 "data_size": 63488 00:13:37.651 } 00:13:37.651 ] 00:13:37.651 }' 00:13:37.651 09:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.651 09:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.651 09:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.651 09:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.651 09:32:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.911 [2024-11-15 09:32:26.153378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:37.911 [2024-11-15 09:32:26.376295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:38.171 [2024-11-15 09:32:26.490805] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:38.171 [2024-11-15 09:32:26.491133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:38.431 142.75 IOPS, 428.25 MiB/s [2024-11-15T09:32:26.894Z] [2024-11-15 09:32:26.807036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:38.431 [2024-11-15 09:32:26.807352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 
offset_begin: 24576 offset_end: 30720 00:13:38.692 [2024-11-15 09:32:27.042422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.692 "name": "raid_bdev1", 00:13:38.692 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:38.692 "strip_size_kb": 0, 00:13:38.692 "state": "online", 00:13:38.692 "raid_level": "raid1", 00:13:38.692 "superblock": true, 00:13:38.692 "num_base_bdevs": 2, 00:13:38.692 "num_base_bdevs_discovered": 2, 00:13:38.692 "num_base_bdevs_operational": 2, 00:13:38.692 "process": { 00:13:38.692 "type": "rebuild", 00:13:38.692 "target": 
"spare", 00:13:38.692 "progress": { 00:13:38.692 "blocks": 32768, 00:13:38.692 "percent": 51 00:13:38.692 } 00:13:38.692 }, 00:13:38.692 "base_bdevs_list": [ 00:13:38.692 { 00:13:38.692 "name": "spare", 00:13:38.692 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:38.692 "is_configured": true, 00:13:38.692 "data_offset": 2048, 00:13:38.692 "data_size": 63488 00:13:38.692 }, 00:13:38.692 { 00:13:38.692 "name": "BaseBdev2", 00:13:38.692 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:38.692 "is_configured": true, 00:13:38.692 "data_offset": 2048, 00:13:38.692 "data_size": 63488 00:13:38.692 } 00:13:38.692 ] 00:13:38.692 }' 00:13:38.692 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.951 [2024-11-15 09:32:27.171358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:38.951 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.951 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.951 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.951 09:32:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.211 127.00 IOPS, 381.00 MiB/s [2024-11-15T09:32:27.674Z] [2024-11-15 09:32:27.584116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:39.471 [2024-11-15 09:32:27.918566] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.042 "name": "raid_bdev1", 00:13:40.042 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:40.042 "strip_size_kb": 0, 00:13:40.042 "state": "online", 00:13:40.042 "raid_level": "raid1", 00:13:40.042 "superblock": true, 00:13:40.042 "num_base_bdevs": 2, 00:13:40.042 "num_base_bdevs_discovered": 2, 00:13:40.042 "num_base_bdevs_operational": 2, 00:13:40.042 "process": { 00:13:40.042 "type": "rebuild", 00:13:40.042 "target": "spare", 00:13:40.042 "progress": { 00:13:40.042 "blocks": 51200, 00:13:40.042 "percent": 80 00:13:40.042 } 00:13:40.042 }, 00:13:40.042 "base_bdevs_list": [ 00:13:40.042 { 00:13:40.042 "name": "spare", 00:13:40.042 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:40.042 "is_configured": true, 00:13:40.042 "data_offset": 2048, 00:13:40.042 "data_size": 63488 00:13:40.042 }, 00:13:40.042 { 00:13:40.042 "name": "BaseBdev2", 00:13:40.042 "uuid": 
"dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:40.042 "is_configured": true, 00:13:40.042 "data_offset": 2048, 00:13:40.042 "data_size": 63488 00:13:40.042 } 00:13:40.042 ] 00:13:40.042 }' 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.042 [2024-11-15 09:32:28.323130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.042 09:32:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.565 111.83 IOPS, 335.50 MiB/s [2024-11-15T09:32:29.028Z] [2024-11-15 09:32:28.862235] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:40.565 [2024-11-15 09:32:28.896387] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:40.565 [2024-11-15 09:32:28.898759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.134 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.135 "name": "raid_bdev1", 00:13:41.135 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:41.135 "strip_size_kb": 0, 00:13:41.135 "state": "online", 00:13:41.135 "raid_level": "raid1", 00:13:41.135 "superblock": true, 00:13:41.135 "num_base_bdevs": 2, 00:13:41.135 "num_base_bdevs_discovered": 2, 00:13:41.135 "num_base_bdevs_operational": 2, 00:13:41.135 "base_bdevs_list": [ 00:13:41.135 { 00:13:41.135 "name": "spare", 00:13:41.135 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:41.135 "is_configured": true, 00:13:41.135 "data_offset": 2048, 00:13:41.135 "data_size": 63488 00:13:41.135 }, 00:13:41.135 { 00:13:41.135 "name": "BaseBdev2", 00:13:41.135 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:41.135 "is_configured": true, 00:13:41.135 "data_offset": 2048, 00:13:41.135 "data_size": 63488 00:13:41.135 } 00:13:41.135 ] 00:13:41.135 }' 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.135 100.43 IOPS, 301.29 MiB/s [2024-11-15T09:32:29.598Z] 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.135 "name": "raid_bdev1", 00:13:41.135 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:41.135 "strip_size_kb": 0, 00:13:41.135 "state": "online", 00:13:41.135 "raid_level": "raid1", 00:13:41.135 "superblock": true, 00:13:41.135 "num_base_bdevs": 2, 00:13:41.135 "num_base_bdevs_discovered": 2, 00:13:41.135 "num_base_bdevs_operational": 2, 00:13:41.135 "base_bdevs_list": [ 00:13:41.135 { 00:13:41.135 "name": "spare", 00:13:41.135 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:41.135 "is_configured": true, 00:13:41.135 "data_offset": 
2048, 00:13:41.135 "data_size": 63488 00:13:41.135 }, 00:13:41.135 { 00:13:41.135 "name": "BaseBdev2", 00:13:41.135 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:41.135 "is_configured": true, 00:13:41.135 "data_offset": 2048, 00:13:41.135 "data_size": 63488 00:13:41.135 } 00:13:41.135 ] 00:13:41.135 }' 00:13:41.135 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.395 "name": "raid_bdev1", 00:13:41.395 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:41.395 "strip_size_kb": 0, 00:13:41.395 "state": "online", 00:13:41.395 "raid_level": "raid1", 00:13:41.395 "superblock": true, 00:13:41.395 "num_base_bdevs": 2, 00:13:41.395 "num_base_bdevs_discovered": 2, 00:13:41.395 "num_base_bdevs_operational": 2, 00:13:41.395 "base_bdevs_list": [ 00:13:41.395 { 00:13:41.395 "name": "spare", 00:13:41.395 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:41.395 "is_configured": true, 00:13:41.395 "data_offset": 2048, 00:13:41.395 "data_size": 63488 00:13:41.395 }, 00:13:41.395 { 00:13:41.395 "name": "BaseBdev2", 00:13:41.395 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:41.395 "is_configured": true, 00:13:41.395 "data_offset": 2048, 00:13:41.395 "data_size": 63488 00:13:41.395 } 00:13:41.395 ] 00:13:41.395 }' 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.395 09:32:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.965 [2024-11-15 
09:32:30.132679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.965 [2024-11-15 09:32:30.132723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.965 00:13:41.965 Latency(us) 00:13:41.965 [2024-11-15T09:32:30.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.965 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:41.965 raid_bdev1 : 7.68 94.30 282.91 0.00 0.00 14521.65 311.22 116762.83 00:13:41.965 [2024-11-15T09:32:30.428Z] =================================================================================================================== 00:13:41.965 [2024-11-15T09:32:30.428Z] Total : 94.30 282.91 0.00 0.00 14521.65 311.22 116762.83 00:13:41.965 [2024-11-15 09:32:30.214931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.965 [2024-11-15 09:32:30.214985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.965 [2024-11-15 09:32:30.215074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.965 [2024-11-15 09:32:30.215088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:41.965 { 00:13:41.965 "results": [ 00:13:41.965 { 00:13:41.965 "job": "raid_bdev1", 00:13:41.965 "core_mask": "0x1", 00:13:41.965 "workload": "randrw", 00:13:41.965 "percentage": 50, 00:13:41.965 "status": "finished", 00:13:41.965 "queue_depth": 2, 00:13:41.965 "io_size": 3145728, 00:13:41.965 "runtime": 7.677411, 00:13:41.965 "iops": 94.30262363184673, 00:13:41.965 "mibps": 282.9078708955402, 00:13:41.965 "io_failed": 0, 00:13:41.965 "io_timeout": 0, 00:13:41.965 "avg_latency_us": 14521.651108591283, 00:13:41.965 "min_latency_us": 311.22445414847164, 00:13:41.965 "max_latency_us": 116762.82969432314 00:13:41.965 } 00:13:41.965 ], 
00:13:41.965 "core_count": 1 00:13:41.965 } 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.965 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.965 09:32:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:42.225 /dev/nbd0 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.225 1+0 records in 00:13:42.225 1+0 records out 00:13:42.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508322 s, 8.1 MB/s 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.225 
09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.225 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:42.226 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.226 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.226 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:42.485 /dev/nbd1 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.485 1+0 records in 00:13:42.485 1+0 records out 00:13:42.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276116 s, 14.8 MB/s 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:13:42.485 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.486 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.486 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:42.745 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:42.745 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.745 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:42.745 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.745 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:42.745 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.745 09:32:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:43.004 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:43.004 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:43.004 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:43.005 09:32:31 
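The `waitfornbd` helper traced above polls `/proc/partitions` for the new device name, then proves the device actually serves I/O with a single 4 KiB direct read. A reconstructed sketch of that pattern (not the exact SPDK helper; the partitions-file parameter is added here so the loop can be exercised without a real nbd device):

```shell
#!/usr/bin/env bash
# Poll a partitions listing until a block device name appears, mirroring
# the `grep -q -w` retry loop in the traced waitfornbd helper.
wait_for_blockdev() {
  local name=$1 partitions=${2:-/proc/partitions} i
  for ((i = 1; i <= 20; i++)); do
    # -w matches whole words only, so "nbd1" does not match "nbd10".
    grep -q -w "$name" "$partitions" && return 0
    sleep 0.1
  done
  return 1
}

# Once the name shows up, the original helper confirms readability with
# one direct-I/O block, bypassing the page cache:
#   dd if=/dev/$name of=/dev/null bs=4096 count=1 iflag=direct
```

The direct read matters because a device node can exist in `/dev` before the backing nbd connection is ready; one `iflag=direct` block forces a real round trip to the device.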
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.005 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.265 09:32:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.265 [2024-11-15 09:32:31.507208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:43.265 [2024-11-15 09:32:31.507269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.265 [2024-11-15 09:32:31.507289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:43.265 [2024-11-15 09:32:31.507300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.265 [2024-11-15 09:32:31.509679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.265 [2024-11-15 09:32:31.509719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:43.265 [2024-11-15 09:32:31.509810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:43.265 [2024-11-15 09:32:31.509877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.265 [2024-11-15 09:32:31.510038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.265 spare 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:43.265 [2024-11-15 09:32:31.609969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:43.265 [2024-11-15 09:32:31.610019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:43.265 [2024-11-15 09:32:31.610380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:43.265 [2024-11-15 09:32:31.610624] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:43.265 [2024-11-15 09:32:31.610647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:43.265 [2024-11-15 09:32:31.610881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.265 "name": "raid_bdev1", 00:13:43.265 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:43.265 "strip_size_kb": 0, 00:13:43.265 "state": "online", 00:13:43.265 "raid_level": "raid1", 00:13:43.265 "superblock": true, 00:13:43.265 "num_base_bdevs": 2, 00:13:43.265 "num_base_bdevs_discovered": 2, 00:13:43.265 "num_base_bdevs_operational": 2, 00:13:43.265 "base_bdevs_list": [ 00:13:43.265 { 00:13:43.265 "name": "spare", 00:13:43.265 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:43.265 "is_configured": true, 00:13:43.265 "data_offset": 2048, 00:13:43.265 "data_size": 63488 00:13:43.265 }, 00:13:43.265 { 00:13:43.265 "name": "BaseBdev2", 00:13:43.265 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:43.265 "is_configured": true, 00:13:43.265 "data_offset": 2048, 00:13:43.265 "data_size": 63488 00:13:43.265 } 00:13:43.265 ] 00:13:43.265 }' 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.265 09:32:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.835 "name": "raid_bdev1", 00:13:43.835 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:43.835 "strip_size_kb": 0, 00:13:43.835 "state": "online", 00:13:43.835 "raid_level": "raid1", 00:13:43.835 "superblock": true, 00:13:43.835 "num_base_bdevs": 2, 00:13:43.835 "num_base_bdevs_discovered": 2, 00:13:43.835 "num_base_bdevs_operational": 2, 00:13:43.835 "base_bdevs_list": [ 00:13:43.835 { 00:13:43.835 "name": "spare", 00:13:43.835 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:43.835 "is_configured": true, 00:13:43.835 "data_offset": 2048, 00:13:43.835 "data_size": 63488 00:13:43.835 }, 00:13:43.835 { 00:13:43.835 "name": "BaseBdev2", 00:13:43.835 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:43.835 "is_configured": true, 00:13:43.835 "data_offset": 2048, 00:13:43.835 "data_size": 63488 00:13:43.835 } 00:13:43.835 ] 00:13:43.835 }' 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.835 [2024-11-15 09:32:32.218175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.835 "name": "raid_bdev1", 00:13:43.835 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:43.835 "strip_size_kb": 0, 00:13:43.835 "state": "online", 00:13:43.835 "raid_level": "raid1", 00:13:43.835 "superblock": true, 00:13:43.835 "num_base_bdevs": 2, 00:13:43.835 "num_base_bdevs_discovered": 1, 00:13:43.835 "num_base_bdevs_operational": 1, 00:13:43.835 "base_bdevs_list": [ 00:13:43.835 { 00:13:43.835 "name": null, 00:13:43.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.835 "is_configured": false, 00:13:43.835 "data_offset": 0, 00:13:43.835 "data_size": 63488 00:13:43.835 }, 00:13:43.835 { 
00:13:43.835 "name": "BaseBdev2", 00:13:43.835 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:43.835 "is_configured": true, 00:13:43.835 "data_offset": 2048, 00:13:43.835 "data_size": 63488 00:13:43.835 } 00:13:43.835 ] 00:13:43.835 }' 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.835 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.405 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.405 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.405 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.405 [2024-11-15 09:32:32.725412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.405 [2024-11-15 09:32:32.725630] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:44.405 [2024-11-15 09:32:32.725644] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:44.405 [2024-11-15 09:32:32.725685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.405 [2024-11-15 09:32:32.741695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:44.405 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.405 09:32:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:44.405 [2024-11-15 09:32:32.743611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.414 "name": "raid_bdev1", 00:13:45.414 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:45.414 "strip_size_kb": 0, 00:13:45.414 "state": "online", 
00:13:45.414 "raid_level": "raid1", 00:13:45.414 "superblock": true, 00:13:45.414 "num_base_bdevs": 2, 00:13:45.414 "num_base_bdevs_discovered": 2, 00:13:45.414 "num_base_bdevs_operational": 2, 00:13:45.414 "process": { 00:13:45.414 "type": "rebuild", 00:13:45.414 "target": "spare", 00:13:45.414 "progress": { 00:13:45.414 "blocks": 20480, 00:13:45.414 "percent": 32 00:13:45.414 } 00:13:45.414 }, 00:13:45.414 "base_bdevs_list": [ 00:13:45.414 { 00:13:45.414 "name": "spare", 00:13:45.414 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:45.414 "is_configured": true, 00:13:45.414 "data_offset": 2048, 00:13:45.414 "data_size": 63488 00:13:45.414 }, 00:13:45.414 { 00:13:45.414 "name": "BaseBdev2", 00:13:45.414 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:45.414 "is_configured": true, 00:13:45.414 "data_offset": 2048, 00:13:45.414 "data_size": 63488 00:13:45.414 } 00:13:45.414 ] 00:13:45.414 }' 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.414 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.674 [2024-11-15 09:32:33.884110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.674 [2024-11-15 09:32:33.949631] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.674 [2024-11-15 
09:32:33.949700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.674 [2024-11-15 09:32:33.949717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.674 [2024-11-15 09:32:33.949724] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.674 09:32:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:13:45.674 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.674 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.674 "name": "raid_bdev1", 00:13:45.674 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:45.674 "strip_size_kb": 0, 00:13:45.674 "state": "online", 00:13:45.674 "raid_level": "raid1", 00:13:45.674 "superblock": true, 00:13:45.674 "num_base_bdevs": 2, 00:13:45.674 "num_base_bdevs_discovered": 1, 00:13:45.674 "num_base_bdevs_operational": 1, 00:13:45.674 "base_bdevs_list": [ 00:13:45.674 { 00:13:45.674 "name": null, 00:13:45.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.674 "is_configured": false, 00:13:45.674 "data_offset": 0, 00:13:45.674 "data_size": 63488 00:13:45.674 }, 00:13:45.674 { 00:13:45.674 "name": "BaseBdev2", 00:13:45.674 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:45.674 "is_configured": true, 00:13:45.674 "data_offset": 2048, 00:13:45.674 "data_size": 63488 00:13:45.674 } 00:13:45.674 ] 00:13:45.674 }' 00:13:45.674 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.674 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.963 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.963 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.963 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.963 [2024-11-15 09:32:34.416263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.963 [2024-11-15 09:32:34.416345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.963 [2024-11-15 09:32:34.416375] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:45.963 [2024-11-15 09:32:34.416387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.963 [2024-11-15 09:32:34.416926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.963 [2024-11-15 09:32:34.416953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.963 [2024-11-15 09:32:34.417065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.963 [2024-11-15 09:32:34.417087] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:45.963 [2024-11-15 09:32:34.417104] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:45.963 [2024-11-15 09:32:34.417127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.222 [2024-11-15 09:32:34.434102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:46.222 spare 00:13:46.222 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.222 [2024-11-15 09:32:34.436024] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.222 09:32:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.171 "name": "raid_bdev1", 00:13:47.171 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:47.171 "strip_size_kb": 0, 00:13:47.171 "state": "online", 00:13:47.171 "raid_level": "raid1", 00:13:47.171 "superblock": true, 00:13:47.171 "num_base_bdevs": 2, 00:13:47.171 "num_base_bdevs_discovered": 2, 00:13:47.171 "num_base_bdevs_operational": 2, 00:13:47.171 "process": { 00:13:47.171 "type": "rebuild", 00:13:47.171 "target": "spare", 00:13:47.171 "progress": { 00:13:47.171 "blocks": 20480, 00:13:47.171 "percent": 32 00:13:47.171 } 00:13:47.171 }, 00:13:47.171 "base_bdevs_list": [ 00:13:47.171 { 00:13:47.171 "name": "spare", 00:13:47.171 "uuid": "ca21754b-9046-52c2-b9e8-9d906888756e", 00:13:47.171 "is_configured": true, 00:13:47.171 "data_offset": 2048, 00:13:47.171 "data_size": 63488 00:13:47.171 }, 00:13:47.171 { 00:13:47.171 "name": "BaseBdev2", 00:13:47.171 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:47.171 "is_configured": true, 00:13:47.171 "data_offset": 2048, 00:13:47.171 "data_size": 63488 00:13:47.171 } 00:13:47.171 ] 00:13:47.171 }' 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.171 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.171 [2024-11-15 09:32:35.596156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.431 [2024-11-15 09:32:35.641737] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.431 [2024-11-15 09:32:35.641860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.431 [2024-11-15 09:32:35.641885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.431 [2024-11-15 09:32:35.641898] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.431 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.432 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.432 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.432 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.432 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.432 "name": "raid_bdev1", 00:13:47.432 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:47.432 "strip_size_kb": 0, 00:13:47.432 "state": "online", 00:13:47.432 "raid_level": "raid1", 00:13:47.432 "superblock": true, 00:13:47.432 "num_base_bdevs": 2, 00:13:47.432 "num_base_bdevs_discovered": 1, 00:13:47.432 "num_base_bdevs_operational": 1, 00:13:47.432 "base_bdevs_list": [ 00:13:47.432 { 00:13:47.432 "name": null, 00:13:47.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.432 "is_configured": false, 00:13:47.432 "data_offset": 0, 00:13:47.432 "data_size": 63488 00:13:47.432 }, 00:13:47.432 { 00:13:47.432 "name": "BaseBdev2", 00:13:47.432 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:47.432 "is_configured": true, 00:13:47.432 "data_offset": 2048, 00:13:47.432 "data_size": 63488 00:13:47.432 } 00:13:47.432 ] 00:13:47.432 }' 
00:13:47.432 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.432 09:32:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.691 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.951 "name": "raid_bdev1", 00:13:47.951 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:47.951 "strip_size_kb": 0, 00:13:47.951 "state": "online", 00:13:47.951 "raid_level": "raid1", 00:13:47.951 "superblock": true, 00:13:47.951 "num_base_bdevs": 2, 00:13:47.951 "num_base_bdevs_discovered": 1, 00:13:47.951 "num_base_bdevs_operational": 1, 00:13:47.951 "base_bdevs_list": [ 00:13:47.951 { 00:13:47.951 "name": null, 00:13:47.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.951 "is_configured": false, 00:13:47.951 "data_offset": 0, 
00:13:47.951 "data_size": 63488 00:13:47.951 }, 00:13:47.951 { 00:13:47.951 "name": "BaseBdev2", 00:13:47.951 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:47.951 "is_configured": true, 00:13:47.951 "data_offset": 2048, 00:13:47.951 "data_size": 63488 00:13:47.951 } 00:13:47.951 ] 00:13:47.951 }' 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.951 [2024-11-15 09:32:36.256232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.951 [2024-11-15 09:32:36.256298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.951 [2024-11-15 09:32:36.256322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:47.951 [2024-11-15 09:32:36.256333] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.951 [2024-11-15 09:32:36.256873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.951 [2024-11-15 09:32:36.256908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.951 [2024-11-15 09:32:36.257000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:47.951 [2024-11-15 09:32:36.257020] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:47.951 [2024-11-15 09:32:36.257029] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:47.951 [2024-11-15 09:32:36.257044] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:47.951 BaseBdev1 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.951 09:32:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.892 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.892 "name": "raid_bdev1", 00:13:48.892 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:48.892 "strip_size_kb": 0, 00:13:48.892 "state": "online", 00:13:48.892 "raid_level": "raid1", 00:13:48.892 "superblock": true, 00:13:48.892 "num_base_bdevs": 2, 00:13:48.892 "num_base_bdevs_discovered": 1, 00:13:48.892 "num_base_bdevs_operational": 1, 00:13:48.892 "base_bdevs_list": [ 00:13:48.892 { 00:13:48.892 "name": null, 00:13:48.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.892 "is_configured": false, 00:13:48.892 "data_offset": 0, 00:13:48.892 "data_size": 63488 00:13:48.892 }, 00:13:48.892 { 00:13:48.892 "name": "BaseBdev2", 00:13:48.892 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:48.892 "is_configured": true, 00:13:48.892 "data_offset": 2048, 00:13:48.892 "data_size": 63488 00:13:48.892 } 00:13:48.892 ] 00:13:48.892 }' 00:13:48.893 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.893 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.463 "name": "raid_bdev1", 00:13:49.463 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:49.463 "strip_size_kb": 0, 00:13:49.463 "state": "online", 00:13:49.463 "raid_level": "raid1", 00:13:49.463 "superblock": true, 00:13:49.463 "num_base_bdevs": 2, 00:13:49.463 "num_base_bdevs_discovered": 1, 00:13:49.463 "num_base_bdevs_operational": 1, 00:13:49.463 "base_bdevs_list": [ 00:13:49.463 { 00:13:49.463 "name": null, 00:13:49.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.463 "is_configured": false, 00:13:49.463 "data_offset": 0, 00:13:49.463 "data_size": 63488 00:13:49.463 }, 00:13:49.463 { 00:13:49.463 "name": "BaseBdev2", 00:13:49.463 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:49.463 "is_configured": true, 
00:13:49.463 "data_offset": 2048, 00:13:49.463 "data_size": 63488 00:13:49.463 } 00:13:49.463 ] 00:13:49.463 }' 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:49.463 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.464 [2024-11-15 09:32:37.881743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:49.464 [2024-11-15 09:32:37.882001] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:49.464 [2024-11-15 09:32:37.882062] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:49.464 request: 00:13:49.464 { 00:13:49.464 "base_bdev": "BaseBdev1", 00:13:49.464 "raid_bdev": "raid_bdev1", 00:13:49.464 "method": "bdev_raid_add_base_bdev", 00:13:49.464 "req_id": 1 00:13:49.464 } 00:13:49.464 Got JSON-RPC error response 00:13:49.464 response: 00:13:49.464 { 00:13:49.464 "code": -22, 00:13:49.464 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:49.464 } 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:49.464 09:32:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:50.842 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:50.842 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.842 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.842 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.842 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.842 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:50.842 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.843 "name": "raid_bdev1", 00:13:50.843 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:50.843 "strip_size_kb": 0, 00:13:50.843 "state": "online", 00:13:50.843 "raid_level": "raid1", 00:13:50.843 "superblock": true, 00:13:50.843 "num_base_bdevs": 2, 00:13:50.843 "num_base_bdevs_discovered": 1, 00:13:50.843 "num_base_bdevs_operational": 1, 00:13:50.843 "base_bdevs_list": [ 00:13:50.843 { 00:13:50.843 "name": null, 00:13:50.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.843 "is_configured": false, 00:13:50.843 "data_offset": 0, 00:13:50.843 "data_size": 63488 00:13:50.843 }, 00:13:50.843 { 00:13:50.843 "name": "BaseBdev2", 00:13:50.843 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:50.843 "is_configured": true, 00:13:50.843 "data_offset": 2048, 00:13:50.843 "data_size": 63488 00:13:50.843 } 00:13:50.843 ] 00:13:50.843 }' 
00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.843 09:32:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.103 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.103 "name": "raid_bdev1", 00:13:51.103 "uuid": "c525c8af-f719-4711-9ef4-8eadee689b06", 00:13:51.103 "strip_size_kb": 0, 00:13:51.103 "state": "online", 00:13:51.104 "raid_level": "raid1", 00:13:51.104 "superblock": true, 00:13:51.104 "num_base_bdevs": 2, 00:13:51.104 "num_base_bdevs_discovered": 1, 00:13:51.104 "num_base_bdevs_operational": 1, 00:13:51.104 "base_bdevs_list": [ 00:13:51.104 { 00:13:51.104 "name": null, 00:13:51.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.104 "is_configured": false, 00:13:51.104 "data_offset": 0, 
00:13:51.104 "data_size": 63488 00:13:51.104 }, 00:13:51.104 { 00:13:51.104 "name": "BaseBdev2", 00:13:51.104 "uuid": "dffb1b56-4f61-53df-a04d-87b9c041ba47", 00:13:51.104 "is_configured": true, 00:13:51.104 "data_offset": 2048, 00:13:51.104 "data_size": 63488 00:13:51.104 } 00:13:51.104 ] 00:13:51.104 }' 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77255 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77255 ']' 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77255 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77255 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:51.104 killing process with pid 77255 00:13:51.104 Received shutdown signal, test time was about 17.033039 seconds 00:13:51.104 00:13:51.104 Latency(us) 00:13:51.104 [2024-11-15T09:32:39.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.104 [2024-11-15T09:32:39.567Z] =================================================================================================================== 00:13:51.104 
[2024-11-15T09:32:39.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77255' 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77255 00:13:51.104 [2024-11-15 09:32:39.530103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.104 [2024-11-15 09:32:39.530234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.104 09:32:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77255 00:13:51.104 [2024-11-15 09:32:39.530292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.104 [2024-11-15 09:32:39.530301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:51.364 [2024-11-15 09:32:39.768261] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.808 09:32:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:52.808 00:13:52.808 real 0m20.239s 00:13:52.808 user 0m26.407s 00:13:52.808 sys 0m2.288s 00:13:52.808 09:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.808 09:32:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.808 ************************************ 00:13:52.808 END TEST raid_rebuild_test_sb_io 00:13:52.808 ************************************ 00:13:52.808 09:32:41 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:52.808 09:32:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:52.808 09:32:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 
']' 00:13:52.808 09:32:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.808 09:32:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.808 ************************************ 00:13:52.808 START TEST raid_rebuild_test 00:13:52.808 ************************************ 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77944 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77944 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77944 ']' 00:13:52.808 09:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.808 
09:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:52.809 09:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.809 09:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.809 09:32:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.809 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:52.809 Zero copy mechanism will not be used. 00:13:52.809 [2024-11-15 09:32:41.173828] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:13:52.809 [2024-11-15 09:32:41.174001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77944 ] 00:13:53.068 [2024-11-15 09:32:41.363574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.068 [2024-11-15 09:32:41.480733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.328 [2024-11-15 09:32:41.687243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.328 [2024-11-15 09:32:41.687297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.588 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.588 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:13:53.588 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.588 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 
-b BaseBdev1_malloc 00:13:53.588 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.588 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 BaseBdev1_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 [2024-11-15 09:32:42.060917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:53.848 [2024-11-15 09:32:42.060996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.848 [2024-11-15 09:32:42.061026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:53.848 [2024-11-15 09:32:42.061040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.848 [2024-11-15 09:32:42.063267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.848 [2024-11-15 09:32:42.063308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:53.848 BaseBdev1 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:53.848 BaseBdev2_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 [2024-11-15 09:32:42.117095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:53.848 [2024-11-15 09:32:42.117247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.848 [2024-11-15 09:32:42.117276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:53.848 [2024-11-15 09:32:42.117291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.848 [2024-11-15 09:32:42.119729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.848 [2024-11-15 09:32:42.119772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:53.848 BaseBdev2 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 BaseBdev3_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 [2024-11-15 09:32:42.186412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:53.848 [2024-11-15 09:32:42.186515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.848 [2024-11-15 09:32:42.186542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:53.848 [2024-11-15 09:32:42.186554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.848 [2024-11-15 09:32:42.188892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.848 [2024-11-15 09:32:42.188935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:53.848 BaseBdev3 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 BaseBdev4_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.848 [2024-11-15 09:32:42.242698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:53.848 [2024-11-15 09:32:42.242754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:53.848 [2024-11-15 09:32:42.242772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:53.848 [2024-11-15 09:32:42.242783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:53.848 [2024-11-15 09:32:42.244903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:53.848 [2024-11-15 09:32:42.244944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:53.848 BaseBdev4 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 spare_malloc 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.848 spare_delay 00:13:53.848 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.849 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:53.849 
09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.849 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 [2024-11-15 09:32:42.312330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:54.108 [2024-11-15 09:32:42.312394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.108 [2024-11-15 09:32:42.312417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:54.108 [2024-11-15 09:32:42.312429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.108 [2024-11-15 09:32:42.314659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.108 [2024-11-15 09:32:42.314701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:54.108 spare 00:13:54.108 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.108 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:54.108 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.108 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.108 [2024-11-15 09:32:42.324354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.108 [2024-11-15 09:32:42.326228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.108 [2024-11-15 09:32:42.326294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.108 [2024-11-15 09:32:42.326347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:54.108 [2024-11-15 09:32:42.326426] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:13:54.109 [2024-11-15 09:32:42.326439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:54.109 [2024-11-15 09:32:42.326684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:54.109 [2024-11-15 09:32:42.326895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:54.109 [2024-11-15 09:32:42.326909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:54.109 [2024-11-15 09:32:42.327074] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.109 09:32:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.109 "name": "raid_bdev1", 00:13:54.109 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:13:54.109 "strip_size_kb": 0, 00:13:54.109 "state": "online", 00:13:54.109 "raid_level": "raid1", 00:13:54.109 "superblock": false, 00:13:54.109 "num_base_bdevs": 4, 00:13:54.109 "num_base_bdevs_discovered": 4, 00:13:54.109 "num_base_bdevs_operational": 4, 00:13:54.109 "base_bdevs_list": [ 00:13:54.109 { 00:13:54.109 "name": "BaseBdev1", 00:13:54.109 "uuid": "af358a62-131c-5b73-b57f-0ccd041f21d8", 00:13:54.109 "is_configured": true, 00:13:54.109 "data_offset": 0, 00:13:54.109 "data_size": 65536 00:13:54.109 }, 00:13:54.109 { 00:13:54.109 "name": "BaseBdev2", 00:13:54.109 "uuid": "fd0a5709-5092-53f1-8089-05be83bdaf56", 00:13:54.109 "is_configured": true, 00:13:54.109 "data_offset": 0, 00:13:54.109 "data_size": 65536 00:13:54.109 }, 00:13:54.109 { 00:13:54.109 "name": "BaseBdev3", 00:13:54.109 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:13:54.109 "is_configured": true, 00:13:54.109 "data_offset": 0, 00:13:54.109 "data_size": 65536 00:13:54.109 }, 00:13:54.109 { 00:13:54.109 "name": "BaseBdev4", 00:13:54.109 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:13:54.109 "is_configured": true, 00:13:54.109 "data_offset": 0, 00:13:54.109 "data_size": 65536 00:13:54.109 } 00:13:54.109 ] 00:13:54.109 }' 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.109 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.369 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:54.369 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:54.369 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.369 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.369 [2024-11-15 09:32:42.815929] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.634 09:32:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:54.906 [2024-11-15 09:32:43.127119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:54.906 /dev/nbd0 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:54.906 09:32:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.906 1+0 records in 00:13:54.906 1+0 records out 00:13:54.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557964 s, 7.3 MB/s 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:54.906 09:32:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:01.475 65536+0 records in 00:14:01.475 65536+0 records out 00:14:01.475 33554432 bytes (34 MB, 32 MiB) copied, 5.59988 s, 6.0 MB/s 00:14:01.475 09:32:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:01.475 09:32:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.475 09:32:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.475 09:32:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.475 
09:32:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:01.475 09:32:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.475 09:32:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.475 [2024-11-15 09:32:49.018306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.475 [2024-11-15 09:32:49.035314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.475 "name": "raid_bdev1", 00:14:01.475 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:01.475 "strip_size_kb": 0, 00:14:01.475 "state": "online", 00:14:01.475 "raid_level": "raid1", 00:14:01.475 "superblock": false, 00:14:01.475 "num_base_bdevs": 4, 00:14:01.475 "num_base_bdevs_discovered": 3, 00:14:01.475 "num_base_bdevs_operational": 3, 00:14:01.475 "base_bdevs_list": [ 00:14:01.475 { 00:14:01.475 "name": null, 00:14:01.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.475 
"is_configured": false, 00:14:01.475 "data_offset": 0, 00:14:01.475 "data_size": 65536 00:14:01.475 }, 00:14:01.475 { 00:14:01.475 "name": "BaseBdev2", 00:14:01.475 "uuid": "fd0a5709-5092-53f1-8089-05be83bdaf56", 00:14:01.475 "is_configured": true, 00:14:01.475 "data_offset": 0, 00:14:01.475 "data_size": 65536 00:14:01.475 }, 00:14:01.475 { 00:14:01.475 "name": "BaseBdev3", 00:14:01.475 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:01.475 "is_configured": true, 00:14:01.475 "data_offset": 0, 00:14:01.475 "data_size": 65536 00:14:01.475 }, 00:14:01.475 { 00:14:01.475 "name": "BaseBdev4", 00:14:01.475 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:01.475 "is_configured": true, 00:14:01.475 "data_offset": 0, 00:14:01.475 "data_size": 65536 00:14:01.475 } 00:14:01.475 ] 00:14:01.475 }' 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.475 [2024-11-15 09:32:49.454565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.475 [2024-11-15 09:32:49.470899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.475 09:32:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:01.475 [2024-11-15 09:32:49.472944] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.214 "name": "raid_bdev1", 00:14:02.214 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:02.214 "strip_size_kb": 0, 00:14:02.214 "state": "online", 00:14:02.214 "raid_level": "raid1", 00:14:02.214 "superblock": false, 00:14:02.214 "num_base_bdevs": 4, 00:14:02.214 "num_base_bdevs_discovered": 4, 00:14:02.214 "num_base_bdevs_operational": 4, 00:14:02.214 "process": { 00:14:02.214 "type": "rebuild", 00:14:02.214 "target": "spare", 00:14:02.214 "progress": { 00:14:02.214 "blocks": 20480, 00:14:02.214 "percent": 31 00:14:02.214 } 00:14:02.214 }, 00:14:02.214 "base_bdevs_list": [ 00:14:02.214 { 00:14:02.214 "name": "spare", 00:14:02.214 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:02.214 "is_configured": true, 00:14:02.214 "data_offset": 0, 00:14:02.214 "data_size": 65536 00:14:02.214 }, 00:14:02.214 { 00:14:02.214 "name": "BaseBdev2", 00:14:02.214 "uuid": 
"fd0a5709-5092-53f1-8089-05be83bdaf56", 00:14:02.214 "is_configured": true, 00:14:02.214 "data_offset": 0, 00:14:02.214 "data_size": 65536 00:14:02.214 }, 00:14:02.214 { 00:14:02.214 "name": "BaseBdev3", 00:14:02.214 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:02.214 "is_configured": true, 00:14:02.214 "data_offset": 0, 00:14:02.214 "data_size": 65536 00:14:02.214 }, 00:14:02.214 { 00:14:02.214 "name": "BaseBdev4", 00:14:02.214 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:02.214 "is_configured": true, 00:14:02.214 "data_offset": 0, 00:14:02.214 "data_size": 65536 00:14:02.214 } 00:14:02.214 ] 00:14:02.214 }' 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.214 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.215 [2024-11-15 09:32:50.636376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.215 [2024-11-15 09:32:50.678704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.215 [2024-11-15 09:32:50.678770] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.215 [2024-11-15 09:32:50.678787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.215 [2024-11-15 09:32:50.678797] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.474 "name": "raid_bdev1", 00:14:02.474 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:02.474 "strip_size_kb": 0, 00:14:02.474 "state": "online", 
00:14:02.474 "raid_level": "raid1", 00:14:02.474 "superblock": false, 00:14:02.474 "num_base_bdevs": 4, 00:14:02.474 "num_base_bdevs_discovered": 3, 00:14:02.474 "num_base_bdevs_operational": 3, 00:14:02.474 "base_bdevs_list": [ 00:14:02.474 { 00:14:02.474 "name": null, 00:14:02.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.474 "is_configured": false, 00:14:02.474 "data_offset": 0, 00:14:02.474 "data_size": 65536 00:14:02.474 }, 00:14:02.474 { 00:14:02.474 "name": "BaseBdev2", 00:14:02.474 "uuid": "fd0a5709-5092-53f1-8089-05be83bdaf56", 00:14:02.474 "is_configured": true, 00:14:02.474 "data_offset": 0, 00:14:02.474 "data_size": 65536 00:14:02.474 }, 00:14:02.474 { 00:14:02.474 "name": "BaseBdev3", 00:14:02.474 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:02.474 "is_configured": true, 00:14:02.474 "data_offset": 0, 00:14:02.474 "data_size": 65536 00:14:02.474 }, 00:14:02.474 { 00:14:02.474 "name": "BaseBdev4", 00:14:02.474 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:02.474 "is_configured": true, 00:14:02.474 "data_offset": 0, 00:14:02.474 "data_size": 65536 00:14:02.474 } 00:14:02.474 ] 00:14:02.474 }' 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.474 09:32:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.734 09:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.994 "name": "raid_bdev1", 00:14:02.994 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:02.994 "strip_size_kb": 0, 00:14:02.994 "state": "online", 00:14:02.994 "raid_level": "raid1", 00:14:02.994 "superblock": false, 00:14:02.994 "num_base_bdevs": 4, 00:14:02.994 "num_base_bdevs_discovered": 3, 00:14:02.994 "num_base_bdevs_operational": 3, 00:14:02.994 "base_bdevs_list": [ 00:14:02.994 { 00:14:02.994 "name": null, 00:14:02.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.994 "is_configured": false, 00:14:02.994 "data_offset": 0, 00:14:02.994 "data_size": 65536 00:14:02.994 }, 00:14:02.994 { 00:14:02.994 "name": "BaseBdev2", 00:14:02.994 "uuid": "fd0a5709-5092-53f1-8089-05be83bdaf56", 00:14:02.994 "is_configured": true, 00:14:02.994 "data_offset": 0, 00:14:02.994 "data_size": 65536 00:14:02.994 }, 00:14:02.994 { 00:14:02.994 "name": "BaseBdev3", 00:14:02.994 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:02.994 "is_configured": true, 00:14:02.994 "data_offset": 0, 00:14:02.994 "data_size": 65536 00:14:02.994 }, 00:14:02.994 { 00:14:02.994 "name": "BaseBdev4", 00:14:02.994 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:02.994 "is_configured": true, 00:14:02.994 "data_offset": 0, 00:14:02.994 "data_size": 65536 00:14:02.994 } 00:14:02.994 ] 00:14:02.994 }' 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.994 [2024-11-15 09:32:51.313579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.994 [2024-11-15 09:32:51.329149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.994 09:32:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:02.994 [2024-11-15 09:32:51.331049] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.933 09:32:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.933 "name": "raid_bdev1", 00:14:03.933 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:03.933 "strip_size_kb": 0, 00:14:03.933 "state": "online", 00:14:03.933 "raid_level": "raid1", 00:14:03.933 "superblock": false, 00:14:03.933 "num_base_bdevs": 4, 00:14:03.933 "num_base_bdevs_discovered": 4, 00:14:03.933 "num_base_bdevs_operational": 4, 00:14:03.933 "process": { 00:14:03.933 "type": "rebuild", 00:14:03.933 "target": "spare", 00:14:03.933 "progress": { 00:14:03.933 "blocks": 20480, 00:14:03.933 "percent": 31 00:14:03.933 } 00:14:03.933 }, 00:14:03.933 "base_bdevs_list": [ 00:14:03.933 { 00:14:03.933 "name": "spare", 00:14:03.933 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:03.933 "is_configured": true, 00:14:03.933 "data_offset": 0, 00:14:03.933 "data_size": 65536 00:14:03.933 }, 00:14:03.933 { 00:14:03.933 "name": "BaseBdev2", 00:14:03.933 "uuid": "fd0a5709-5092-53f1-8089-05be83bdaf56", 00:14:03.933 "is_configured": true, 00:14:03.933 "data_offset": 0, 00:14:03.933 "data_size": 65536 00:14:03.933 }, 00:14:03.933 { 00:14:03.933 "name": "BaseBdev3", 00:14:03.933 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:03.933 "is_configured": true, 00:14:03.933 "data_offset": 0, 00:14:03.933 "data_size": 65536 00:14:03.933 }, 00:14:03.933 { 00:14:03.933 "name": "BaseBdev4", 00:14:03.933 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:03.933 "is_configured": true, 00:14:03.933 "data_offset": 0, 00:14:03.933 "data_size": 65536 00:14:03.933 } 00:14:03.933 ] 00:14:03.933 }' 00:14:03.933 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.192 [2024-11-15 09:32:52.470478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.192 [2024-11-15 09:32:52.536990] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.192 
09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.192 "name": "raid_bdev1", 00:14:04.192 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:04.192 "strip_size_kb": 0, 00:14:04.192 "state": "online", 00:14:04.192 "raid_level": "raid1", 00:14:04.192 "superblock": false, 00:14:04.192 "num_base_bdevs": 4, 00:14:04.192 "num_base_bdevs_discovered": 3, 00:14:04.192 "num_base_bdevs_operational": 3, 00:14:04.192 "process": { 00:14:04.192 "type": "rebuild", 00:14:04.192 "target": "spare", 00:14:04.192 "progress": { 00:14:04.192 "blocks": 24576, 00:14:04.192 "percent": 37 00:14:04.192 } 00:14:04.192 }, 00:14:04.192 "base_bdevs_list": [ 00:14:04.192 { 00:14:04.192 "name": "spare", 00:14:04.192 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:04.192 "is_configured": true, 00:14:04.192 "data_offset": 0, 00:14:04.192 "data_size": 65536 00:14:04.192 }, 00:14:04.192 { 00:14:04.192 "name": null, 00:14:04.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.192 "is_configured": false, 00:14:04.192 "data_offset": 0, 00:14:04.192 "data_size": 65536 00:14:04.192 }, 00:14:04.192 { 00:14:04.192 "name": "BaseBdev3", 00:14:04.192 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:04.192 "is_configured": true, 
00:14:04.192 "data_offset": 0, 00:14:04.192 "data_size": 65536 00:14:04.192 }, 00:14:04.192 { 00:14:04.192 "name": "BaseBdev4", 00:14:04.192 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:04.192 "is_configured": true, 00:14:04.192 "data_offset": 0, 00:14:04.192 "data_size": 65536 00:14:04.192 } 00:14:04.192 ] 00:14:04.192 }' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.192 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=466 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.451 09:32:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.451 "name": "raid_bdev1", 00:14:04.451 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:04.451 "strip_size_kb": 0, 00:14:04.451 "state": "online", 00:14:04.451 "raid_level": "raid1", 00:14:04.451 "superblock": false, 00:14:04.451 "num_base_bdevs": 4, 00:14:04.451 "num_base_bdevs_discovered": 3, 00:14:04.451 "num_base_bdevs_operational": 3, 00:14:04.451 "process": { 00:14:04.451 "type": "rebuild", 00:14:04.451 "target": "spare", 00:14:04.451 "progress": { 00:14:04.451 "blocks": 26624, 00:14:04.451 "percent": 40 00:14:04.451 } 00:14:04.451 }, 00:14:04.451 "base_bdevs_list": [ 00:14:04.451 { 00:14:04.451 "name": "spare", 00:14:04.451 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:04.451 "is_configured": true, 00:14:04.451 "data_offset": 0, 00:14:04.451 "data_size": 65536 00:14:04.451 }, 00:14:04.451 { 00:14:04.451 "name": null, 00:14:04.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.451 "is_configured": false, 00:14:04.451 "data_offset": 0, 00:14:04.451 "data_size": 65536 00:14:04.451 }, 00:14:04.451 { 00:14:04.451 "name": "BaseBdev3", 00:14:04.451 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:04.451 "is_configured": true, 00:14:04.451 "data_offset": 0, 00:14:04.451 "data_size": 65536 00:14:04.451 }, 00:14:04.451 { 00:14:04.451 "name": "BaseBdev4", 00:14:04.451 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:04.451 "is_configured": true, 00:14:04.451 "data_offset": 0, 00:14:04.451 "data_size": 65536 00:14:04.451 } 00:14:04.451 ] 00:14:04.451 }' 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.451 09:32:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.395 09:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.655 09:32:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.655 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.655 "name": "raid_bdev1", 00:14:05.655 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:05.655 "strip_size_kb": 0, 00:14:05.655 "state": "online", 00:14:05.655 "raid_level": "raid1", 00:14:05.655 "superblock": false, 00:14:05.655 "num_base_bdevs": 4, 00:14:05.655 "num_base_bdevs_discovered": 3, 00:14:05.655 "num_base_bdevs_operational": 3, 00:14:05.655 "process": { 00:14:05.655 "type": "rebuild", 00:14:05.655 "target": "spare", 00:14:05.655 "progress": { 00:14:05.655 
"blocks": 51200, 00:14:05.655 "percent": 78 00:14:05.655 } 00:14:05.655 }, 00:14:05.655 "base_bdevs_list": [ 00:14:05.655 { 00:14:05.655 "name": "spare", 00:14:05.655 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:05.655 "is_configured": true, 00:14:05.655 "data_offset": 0, 00:14:05.655 "data_size": 65536 00:14:05.655 }, 00:14:05.655 { 00:14:05.655 "name": null, 00:14:05.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.655 "is_configured": false, 00:14:05.655 "data_offset": 0, 00:14:05.655 "data_size": 65536 00:14:05.655 }, 00:14:05.655 { 00:14:05.655 "name": "BaseBdev3", 00:14:05.655 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:05.655 "is_configured": true, 00:14:05.655 "data_offset": 0, 00:14:05.655 "data_size": 65536 00:14:05.655 }, 00:14:05.655 { 00:14:05.655 "name": "BaseBdev4", 00:14:05.655 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:05.655 "is_configured": true, 00:14:05.655 "data_offset": 0, 00:14:05.655 "data_size": 65536 00:14:05.655 } 00:14:05.655 ] 00:14:05.655 }' 00:14:05.655 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.655 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.655 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.655 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.655 09:32:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:06.221 [2024-11-15 09:32:54.546922] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:06.221 [2024-11-15 09:32:54.547119] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:06.221 [2024-11-15 09:32:54.547193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.790 09:32:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.790 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.790 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.791 "name": "raid_bdev1", 00:14:06.791 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:06.791 "strip_size_kb": 0, 00:14:06.791 "state": "online", 00:14:06.791 "raid_level": "raid1", 00:14:06.791 "superblock": false, 00:14:06.791 "num_base_bdevs": 4, 00:14:06.791 "num_base_bdevs_discovered": 3, 00:14:06.791 "num_base_bdevs_operational": 3, 00:14:06.791 "base_bdevs_list": [ 00:14:06.791 { 00:14:06.791 "name": "spare", 00:14:06.791 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": null, 00:14:06.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.791 "is_configured": false, 00:14:06.791 
"data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": "BaseBdev3", 00:14:06.791 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": "BaseBdev4", 00:14:06.791 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 } 00:14:06.791 ] 00:14:06.791 }' 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.791 09:32:55 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.791 "name": "raid_bdev1", 00:14:06.791 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:06.791 "strip_size_kb": 0, 00:14:06.791 "state": "online", 00:14:06.791 "raid_level": "raid1", 00:14:06.791 "superblock": false, 00:14:06.791 "num_base_bdevs": 4, 00:14:06.791 "num_base_bdevs_discovered": 3, 00:14:06.791 "num_base_bdevs_operational": 3, 00:14:06.791 "base_bdevs_list": [ 00:14:06.791 { 00:14:06.791 "name": "spare", 00:14:06.791 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": null, 00:14:06.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.791 "is_configured": false, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": "BaseBdev3", 00:14:06.791 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 }, 00:14:06.791 { 00:14:06.791 "name": "BaseBdev4", 00:14:06.791 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:06.791 "is_configured": true, 00:14:06.791 "data_offset": 0, 00:14:06.791 "data_size": 65536 00:14:06.791 } 00:14:06.791 ] 00:14:06.791 }' 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.791 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.051 "name": "raid_bdev1", 00:14:07.051 "uuid": "bd6a150b-bfce-4a61-8061-81b6e6363cb9", 00:14:07.051 "strip_size_kb": 0, 00:14:07.051 "state": "online", 00:14:07.051 "raid_level": "raid1", 00:14:07.051 "superblock": false, 00:14:07.051 "num_base_bdevs": 4, 00:14:07.051 
"num_base_bdevs_discovered": 3, 00:14:07.051 "num_base_bdevs_operational": 3, 00:14:07.051 "base_bdevs_list": [ 00:14:07.051 { 00:14:07.051 "name": "spare", 00:14:07.051 "uuid": "f2f7e268-c2c5-5340-90fc-b71599bdaa92", 00:14:07.051 "is_configured": true, 00:14:07.051 "data_offset": 0, 00:14:07.051 "data_size": 65536 00:14:07.051 }, 00:14:07.051 { 00:14:07.051 "name": null, 00:14:07.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.051 "is_configured": false, 00:14:07.051 "data_offset": 0, 00:14:07.051 "data_size": 65536 00:14:07.051 }, 00:14:07.051 { 00:14:07.051 "name": "BaseBdev3", 00:14:07.051 "uuid": "dd2c658f-b967-543e-a088-baefc26df4bc", 00:14:07.051 "is_configured": true, 00:14:07.051 "data_offset": 0, 00:14:07.051 "data_size": 65536 00:14:07.051 }, 00:14:07.051 { 00:14:07.051 "name": "BaseBdev4", 00:14:07.051 "uuid": "ed39d074-3fb9-58df-8ee9-769e1275aaa5", 00:14:07.051 "is_configured": true, 00:14:07.051 "data_offset": 0, 00:14:07.051 "data_size": 65536 00:14:07.051 } 00:14:07.051 ] 00:14:07.051 }' 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.051 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.309 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.309 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.309 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.309 [2024-11-15 09:32:55.711841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.309 [2024-11-15 09:32:55.711982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.309 [2024-11-15 09:32:55.712102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.310 [2024-11-15 09:32:55.712212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:14:07.310 [2024-11-15 09:32:55.712268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.310 09:32:55 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.310 09:32:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:07.569 /dev/nbd0 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.569 1+0 records in 00:14:07.569 1+0 records out 00:14:07.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187793 s, 21.8 MB/s 00:14:07.569 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:07.829 /dev/nbd1 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:07.829 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.095 1+0 records in 00:14:08.095 1+0 records out 00:14:08.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645161 s, 6.3 MB/s 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.095 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.363 09:32:56 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.363 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77944 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77944 ']' 00:14:08.622 09:32:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77944 00:14:08.622 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # 
uname 00:14:08.623 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:08.623 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77944 00:14:08.623 killing process with pid 77944 00:14:08.623 Received shutdown signal, test time was about 60.000000 seconds 00:14:08.623 00:14:08.623 Latency(us) 00:14:08.623 [2024-11-15T09:32:57.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.623 [2024-11-15T09:32:57.086Z] =================================================================================================================== 00:14:08.623 [2024-11-15T09:32:57.086Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.623 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:08.623 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:08.623 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77944' 00:14:08.623 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77944 00:14:08.623 [2024-11-15 09:32:57.042524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.623 09:32:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77944 00:14:09.191 [2024-11-15 09:32:57.552560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:10.569 00:14:10.569 real 0m17.653s 00:14:10.569 user 0m19.988s 00:14:10.569 sys 0m3.192s 00:14:10.569 ************************************ 00:14:10.569 END TEST raid_rebuild_test 00:14:10.569 ************************************ 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:14:10.569 09:32:58 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:10.569 09:32:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:10.569 09:32:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.569 09:32:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.569 ************************************ 00:14:10.569 START TEST raid_rebuild_test_sb 00:14:10.569 ************************************ 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78391 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78391 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78391 ']' 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:10.569 09:32:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.569 [2024-11-15 09:32:58.899235] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:14:10.569 [2024-11-15 09:32:58.899491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.569 Zero copy mechanism will not be used. 
00:14:10.569 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78391 ] 00:14:10.828 [2024-11-15 09:32:59.081603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.828 [2024-11-15 09:32:59.198296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.087 [2024-11-15 09:32:59.411592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.087 [2024-11-15 09:32:59.411714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.347 BaseBdev1_malloc 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.347 [2024-11-15 09:32:59.795664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:11.347 [2024-11-15 09:32:59.795797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:11.347 [2024-11-15 09:32:59.795831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:11.347 [2024-11-15 09:32:59.795844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.347 [2024-11-15 09:32:59.798209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.347 [2024-11-15 09:32:59.798250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:11.347 BaseBdev1 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.347 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 BaseBdev2_malloc 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 [2024-11-15 09:32:59.856757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:11.607 [2024-11-15 09:32:59.856831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.607 [2024-11-15 09:32:59.856869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:11.607 [2024-11-15 09:32:59.856885] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.607 [2024-11-15 09:32:59.859226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.607 [2024-11-15 09:32:59.859267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:11.607 BaseBdev2 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 BaseBdev3_malloc 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:11.607 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 [2024-11-15 09:32:59.927539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:11.608 [2024-11-15 09:32:59.927640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.608 [2024-11-15 09:32:59.927680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:11.608 [2024-11-15 09:32:59.927715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.608 [2024-11-15 09:32:59.929863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:11.608 [2024-11-15 09:32:59.929941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:11.608 BaseBdev3 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 BaseBdev4_malloc 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 [2024-11-15 09:32:59.982856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:11.608 [2024-11-15 09:32:59.982928] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.608 [2024-11-15 09:32:59.982948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:11.608 [2024-11-15 09:32:59.982960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.608 [2024-11-15 09:32:59.984971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.608 [2024-11-15 09:32:59.985011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:11.608 BaseBdev4 00:14:11.608 09:32:59 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 09:32:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 spare_malloc 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 spare_delay 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 [2024-11-15 09:33:00.049958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.608 [2024-11-15 09:33:00.050014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.608 [2024-11-15 09:33:00.050033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:11.608 [2024-11-15 09:33:00.050044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.608 [2024-11-15 09:33:00.052023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:14:11.608 [2024-11-15 09:33:00.052144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.608 spare 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.608 [2024-11-15 09:33:00.062011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.608 [2024-11-15 09:33:00.063769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.608 [2024-11-15 09:33:00.063925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.608 [2024-11-15 09:33:00.063988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:11.608 [2024-11-15 09:33:00.064168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:11.608 [2024-11-15 09:33:00.064185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.608 [2024-11-15 09:33:00.064421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:11.608 [2024-11-15 09:33:00.064606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:11.608 [2024-11-15 09:33:00.064616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:11.608 [2024-11-15 09:33:00.064773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.608 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.867 "name": "raid_bdev1", 00:14:11.867 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:11.867 "strip_size_kb": 0, 00:14:11.867 "state": "online", 00:14:11.867 "raid_level": "raid1", 
00:14:11.867 "superblock": true, 00:14:11.867 "num_base_bdevs": 4, 00:14:11.867 "num_base_bdevs_discovered": 4, 00:14:11.867 "num_base_bdevs_operational": 4, 00:14:11.867 "base_bdevs_list": [ 00:14:11.867 { 00:14:11.867 "name": "BaseBdev1", 00:14:11.867 "uuid": "2b090a5f-212a-58d5-9fd8-2abb4e782a23", 00:14:11.867 "is_configured": true, 00:14:11.867 "data_offset": 2048, 00:14:11.867 "data_size": 63488 00:14:11.867 }, 00:14:11.867 { 00:14:11.867 "name": "BaseBdev2", 00:14:11.867 "uuid": "35228c01-0946-5d3d-a4e5-91b1ee681fb4", 00:14:11.867 "is_configured": true, 00:14:11.867 "data_offset": 2048, 00:14:11.867 "data_size": 63488 00:14:11.867 }, 00:14:11.867 { 00:14:11.867 "name": "BaseBdev3", 00:14:11.867 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:11.867 "is_configured": true, 00:14:11.867 "data_offset": 2048, 00:14:11.867 "data_size": 63488 00:14:11.867 }, 00:14:11.867 { 00:14:11.867 "name": "BaseBdev4", 00:14:11.867 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:11.867 "is_configured": true, 00:14:11.867 "data_offset": 2048, 00:14:11.867 "data_size": 63488 00:14:11.867 } 00:14:11.867 ] 00:14:11.867 }' 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.867 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.126 [2024-11-15 09:33:00.541627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.126 
09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.126 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:12.386 [2024-11-15 09:33:00.800878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:12.386 /dev/nbd0 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:12.386 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.647 1+0 records in 00:14:12.647 1+0 records out 00:14:12.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036639 s, 11.2 MB/s 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:12.647 09:33:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:12.647 09:33:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:19.209 63488+0 records in 00:14:19.209 63488+0 records out 00:14:19.209 32505856 bytes (33 MB, 31 MiB) copied, 6.0561 s, 5.4 MB/s 00:14:19.209 09:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:19.209 09:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.209 09:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:19.209 09:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.209 09:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:19.209 09:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.209 09:33:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:19.209 [2024-11-15 09:33:07.141624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.209 [2024-11-15 09:33:07.181642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.209 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.209 09:33:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.210 "name": "raid_bdev1", 00:14:19.210 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:19.210 "strip_size_kb": 0, 00:14:19.210 "state": "online", 00:14:19.210 "raid_level": "raid1", 00:14:19.210 "superblock": true, 00:14:19.210 "num_base_bdevs": 4, 00:14:19.210 "num_base_bdevs_discovered": 3, 00:14:19.210 "num_base_bdevs_operational": 3, 00:14:19.210 "base_bdevs_list": [ 00:14:19.210 { 00:14:19.210 "name": null, 00:14:19.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.210 "is_configured": false, 00:14:19.210 "data_offset": 0, 00:14:19.210 "data_size": 63488 00:14:19.210 }, 00:14:19.210 { 00:14:19.210 "name": "BaseBdev2", 00:14:19.210 "uuid": "35228c01-0946-5d3d-a4e5-91b1ee681fb4", 00:14:19.210 "is_configured": true, 00:14:19.210 "data_offset": 2048, 00:14:19.210 "data_size": 63488 00:14:19.210 }, 00:14:19.210 { 00:14:19.210 "name": "BaseBdev3", 00:14:19.210 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 
00:14:19.210 "is_configured": true, 00:14:19.210 "data_offset": 2048, 00:14:19.210 "data_size": 63488 00:14:19.210 }, 00:14:19.210 { 00:14:19.210 "name": "BaseBdev4", 00:14:19.210 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:19.210 "is_configured": true, 00:14:19.210 "data_offset": 2048, 00:14:19.210 "data_size": 63488 00:14:19.210 } 00:14:19.210 ] 00:14:19.210 }' 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.210 [2024-11-15 09:33:07.628938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.210 [2024-11-15 09:33:07.645505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.210 09:33:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:19.210 [2024-11-15 09:33:07.647604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.586 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.586 "name": "raid_bdev1", 00:14:20.587 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:20.587 "strip_size_kb": 0, 00:14:20.587 "state": "online", 00:14:20.587 "raid_level": "raid1", 00:14:20.587 "superblock": true, 00:14:20.587 "num_base_bdevs": 4, 00:14:20.587 "num_base_bdevs_discovered": 4, 00:14:20.587 "num_base_bdevs_operational": 4, 00:14:20.587 "process": { 00:14:20.587 "type": "rebuild", 00:14:20.587 "target": "spare", 00:14:20.587 "progress": { 00:14:20.587 "blocks": 20480, 00:14:20.587 "percent": 32 00:14:20.587 } 00:14:20.587 }, 00:14:20.587 "base_bdevs_list": [ 00:14:20.587 { 00:14:20.587 "name": "spare", 00:14:20.587 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:20.587 "is_configured": true, 00:14:20.587 "data_offset": 2048, 00:14:20.587 "data_size": 63488 00:14:20.587 }, 00:14:20.587 { 00:14:20.587 "name": "BaseBdev2", 00:14:20.587 "uuid": "35228c01-0946-5d3d-a4e5-91b1ee681fb4", 00:14:20.587 "is_configured": true, 00:14:20.587 "data_offset": 2048, 00:14:20.587 "data_size": 63488 00:14:20.587 }, 00:14:20.587 { 00:14:20.587 "name": "BaseBdev3", 00:14:20.587 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:20.587 "is_configured": true, 00:14:20.587 "data_offset": 2048, 00:14:20.587 "data_size": 63488 00:14:20.587 }, 00:14:20.587 { 
00:14:20.587 "name": "BaseBdev4", 00:14:20.587 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:20.587 "is_configured": true, 00:14:20.587 "data_offset": 2048, 00:14:20.587 "data_size": 63488 00:14:20.587 } 00:14:20.587 ] 00:14:20.587 }' 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 [2024-11-15 09:33:08.810992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.587 [2024-11-15 09:33:08.853198] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:20.587 [2024-11-15 09:33:08.853365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.587 [2024-11-15 09:33:08.853407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.587 [2024-11-15 09:33:08.853432] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.587 "name": "raid_bdev1", 00:14:20.587 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:20.587 "strip_size_kb": 0, 00:14:20.587 "state": "online", 00:14:20.587 "raid_level": "raid1", 00:14:20.587 "superblock": true, 00:14:20.587 "num_base_bdevs": 4, 00:14:20.587 "num_base_bdevs_discovered": 3, 00:14:20.587 "num_base_bdevs_operational": 3, 00:14:20.587 "base_bdevs_list": [ 00:14:20.587 { 00:14:20.587 "name": null, 00:14:20.587 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:20.587 "is_configured": false, 00:14:20.587 "data_offset": 0, 00:14:20.587 "data_size": 63488 00:14:20.587 }, 00:14:20.587 { 00:14:20.587 "name": "BaseBdev2", 00:14:20.587 "uuid": "35228c01-0946-5d3d-a4e5-91b1ee681fb4", 00:14:20.587 "is_configured": true, 00:14:20.587 "data_offset": 2048, 00:14:20.587 "data_size": 63488 00:14:20.587 }, 00:14:20.587 { 00:14:20.587 "name": "BaseBdev3", 00:14:20.587 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:20.587 "is_configured": true, 00:14:20.587 "data_offset": 2048, 00:14:20.587 "data_size": 63488 00:14:20.587 }, 00:14:20.587 { 00:14:20.587 "name": "BaseBdev4", 00:14:20.587 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:20.587 "is_configured": true, 00:14:20.587 "data_offset": 2048, 00:14:20.587 "data_size": 63488 00:14:20.587 } 00:14:20.587 ] 00:14:20.587 }' 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.587 09:33:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.154 09:33:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.154 "name": "raid_bdev1", 00:14:21.154 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:21.154 "strip_size_kb": 0, 00:14:21.154 "state": "online", 00:14:21.154 "raid_level": "raid1", 00:14:21.154 "superblock": true, 00:14:21.154 "num_base_bdevs": 4, 00:14:21.154 "num_base_bdevs_discovered": 3, 00:14:21.154 "num_base_bdevs_operational": 3, 00:14:21.154 "base_bdevs_list": [ 00:14:21.154 { 00:14:21.154 "name": null, 00:14:21.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.154 "is_configured": false, 00:14:21.154 "data_offset": 0, 00:14:21.154 "data_size": 63488 00:14:21.154 }, 00:14:21.154 { 00:14:21.154 "name": "BaseBdev2", 00:14:21.154 "uuid": "35228c01-0946-5d3d-a4e5-91b1ee681fb4", 00:14:21.154 "is_configured": true, 00:14:21.154 "data_offset": 2048, 00:14:21.154 "data_size": 63488 00:14:21.154 }, 00:14:21.154 { 00:14:21.154 "name": "BaseBdev3", 00:14:21.154 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:21.154 "is_configured": true, 00:14:21.154 "data_offset": 2048, 00:14:21.154 "data_size": 63488 00:14:21.154 }, 00:14:21.154 { 00:14:21.154 "name": "BaseBdev4", 00:14:21.154 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:21.154 "is_configured": true, 00:14:21.154 "data_offset": 2048, 00:14:21.154 "data_size": 63488 00:14:21.154 } 00:14:21.154 ] 00:14:21.154 }' 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.154 [2024-11-15 09:33:09.501049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.154 [2024-11-15 09:33:09.516294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.154 09:33:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:21.154 [2024-11-15 09:33:09.518485] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.087 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.087 09:33:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.345 "name": "raid_bdev1", 00:14:22.345 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:22.345 "strip_size_kb": 0, 00:14:22.345 "state": "online", 00:14:22.345 "raid_level": "raid1", 00:14:22.345 "superblock": true, 00:14:22.345 "num_base_bdevs": 4, 00:14:22.345 "num_base_bdevs_discovered": 4, 00:14:22.345 "num_base_bdevs_operational": 4, 00:14:22.345 "process": { 00:14:22.345 "type": "rebuild", 00:14:22.345 "target": "spare", 00:14:22.345 "progress": { 00:14:22.345 "blocks": 20480, 00:14:22.345 "percent": 32 00:14:22.345 } 00:14:22.345 }, 00:14:22.345 "base_bdevs_list": [ 00:14:22.345 { 00:14:22.345 "name": "spare", 00:14:22.345 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:22.345 "is_configured": true, 00:14:22.345 "data_offset": 2048, 00:14:22.345 "data_size": 63488 00:14:22.345 }, 00:14:22.345 { 00:14:22.345 "name": "BaseBdev2", 00:14:22.345 "uuid": "35228c01-0946-5d3d-a4e5-91b1ee681fb4", 00:14:22.345 "is_configured": true, 00:14:22.345 "data_offset": 2048, 00:14:22.345 "data_size": 63488 00:14:22.345 }, 00:14:22.345 { 00:14:22.345 "name": "BaseBdev3", 00:14:22.345 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:22.345 "is_configured": true, 00:14:22.345 "data_offset": 2048, 00:14:22.345 "data_size": 63488 00:14:22.345 }, 00:14:22.345 { 00:14:22.345 "name": "BaseBdev4", 00:14:22.345 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:22.345 "is_configured": true, 00:14:22.345 "data_offset": 2048, 00:14:22.345 "data_size": 63488 00:14:22.345 } 00:14:22.345 ] 00:14:22.345 }' 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:22.345 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.345 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.345 [2024-11-15 09:33:10.693319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.604 [2024-11-15 09:33:10.824168] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb 
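The trace records a genuine shell error at bdev_raid.sh line 666: the xtrace line shows `'[' = false ']'`, i.e. an unquoted variable expanded to nothing and left `[` without a left-hand operand, producing `[: =: unary operator expected`. The test script tolerates it and continues, but the failure mode is easy to reproduce. A minimal sketch (`flag` is a stand-in name; the real variable in bdev_raid.sh is not visible in the trace):

```shell
# Reproduce the "[: =: unary operator expected" error from the log.
unset flag

status=0
[ $flag = false ] 2>/dev/null || status=$?   # expands to `[ = false ]`
echo "unquoted: exit status $status"         # 2: syntax error, not a comparison

status=0
[ "$flag" = false ] || status=$?             # empty string is a real operand
echo "quoted:   exit status $status"         # 1: comparison ran, result false
```

Quoting the expansion (or using bash's `[[ ]]`, which does not word-split) turns the syntax error into an ordinary false comparison.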
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.604 "name": "raid_bdev1", 00:14:22.604 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:22.604 "strip_size_kb": 0, 00:14:22.604 "state": "online", 00:14:22.604 "raid_level": "raid1", 00:14:22.604 "superblock": true, 00:14:22.604 "num_base_bdevs": 4, 00:14:22.604 "num_base_bdevs_discovered": 3, 00:14:22.604 "num_base_bdevs_operational": 3, 00:14:22.604 "process": { 00:14:22.604 "type": "rebuild", 00:14:22.604 "target": "spare", 00:14:22.604 "progress": { 00:14:22.604 "blocks": 24576, 00:14:22.604 "percent": 38 00:14:22.604 } 00:14:22.604 }, 00:14:22.604 "base_bdevs_list": [ 00:14:22.604 { 00:14:22.604 "name": "spare", 00:14:22.604 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:22.604 "is_configured": true, 00:14:22.604 "data_offset": 2048, 00:14:22.604 "data_size": 63488 00:14:22.604 }, 00:14:22.604 { 00:14:22.604 "name": null, 00:14:22.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.604 "is_configured": false, 00:14:22.604 "data_offset": 0, 00:14:22.604 "data_size": 63488 00:14:22.604 }, 00:14:22.604 { 00:14:22.604 "name": "BaseBdev3", 
00:14:22.604 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:22.604 "is_configured": true, 00:14:22.604 "data_offset": 2048, 00:14:22.604 "data_size": 63488 00:14:22.604 }, 00:14:22.604 { 00:14:22.604 "name": "BaseBdev4", 00:14:22.604 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:22.604 "is_configured": true, 00:14:22.604 "data_offset": 2048, 00:14:22.604 "data_size": 63488 00:14:22.604 } 00:14:22.604 ] 00:14:22.604 }' 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=484 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:22.604 09:33:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.604 09:33:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.604 09:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.604 "name": "raid_bdev1", 00:14:22.604 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:22.604 "strip_size_kb": 0, 00:14:22.604 "state": "online", 00:14:22.604 "raid_level": "raid1", 00:14:22.604 "superblock": true, 00:14:22.604 "num_base_bdevs": 4, 00:14:22.604 "num_base_bdevs_discovered": 3, 00:14:22.604 "num_base_bdevs_operational": 3, 00:14:22.604 "process": { 00:14:22.604 "type": "rebuild", 00:14:22.604 "target": "spare", 00:14:22.604 "progress": { 00:14:22.604 "blocks": 26624, 00:14:22.604 "percent": 41 00:14:22.604 } 00:14:22.604 }, 00:14:22.604 "base_bdevs_list": [ 00:14:22.604 { 00:14:22.604 "name": "spare", 00:14:22.604 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:22.604 "is_configured": true, 00:14:22.604 "data_offset": 2048, 00:14:22.604 "data_size": 63488 00:14:22.604 }, 00:14:22.604 { 00:14:22.604 "name": null, 00:14:22.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.604 "is_configured": false, 00:14:22.604 "data_offset": 0, 00:14:22.604 "data_size": 63488 00:14:22.604 }, 00:14:22.604 { 00:14:22.604 "name": "BaseBdev3", 00:14:22.604 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:22.604 "is_configured": true, 00:14:22.604 "data_offset": 2048, 00:14:22.604 "data_size": 63488 00:14:22.604 }, 00:14:22.604 { 00:14:22.604 "name": "BaseBdev4", 00:14:22.604 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:22.604 "is_configured": true, 00:14:22.604 "data_offset": 2048, 00:14:22.604 "data_size": 63488 00:14:22.604 } 00:14:22.604 ] 00:14:22.604 }' 00:14:22.604 09:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.604 09:33:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.604 09:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.862 09:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.862 09:33:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.795 "name": "raid_bdev1", 00:14:23.795 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:23.795 "strip_size_kb": 0, 00:14:23.795 "state": "online", 00:14:23.795 "raid_level": "raid1", 00:14:23.795 "superblock": true, 00:14:23.795 "num_base_bdevs": 4, 
00:14:23.795 "num_base_bdevs_discovered": 3, 00:14:23.795 "num_base_bdevs_operational": 3, 00:14:23.795 "process": { 00:14:23.795 "type": "rebuild", 00:14:23.795 "target": "spare", 00:14:23.795 "progress": { 00:14:23.795 "blocks": 51200, 00:14:23.795 "percent": 80 00:14:23.795 } 00:14:23.795 }, 00:14:23.795 "base_bdevs_list": [ 00:14:23.795 { 00:14:23.795 "name": "spare", 00:14:23.795 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:23.795 "is_configured": true, 00:14:23.795 "data_offset": 2048, 00:14:23.795 "data_size": 63488 00:14:23.795 }, 00:14:23.795 { 00:14:23.795 "name": null, 00:14:23.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.795 "is_configured": false, 00:14:23.795 "data_offset": 0, 00:14:23.795 "data_size": 63488 00:14:23.795 }, 00:14:23.795 { 00:14:23.795 "name": "BaseBdev3", 00:14:23.795 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:23.795 "is_configured": true, 00:14:23.795 "data_offset": 2048, 00:14:23.795 "data_size": 63488 00:14:23.795 }, 00:14:23.795 { 00:14:23.795 "name": "BaseBdev4", 00:14:23.795 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:23.795 "is_configured": true, 00:14:23.795 "data_offset": 2048, 00:14:23.795 "data_size": 63488 00:14:23.795 } 00:14:23.795 ] 00:14:23.795 }' 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.795 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.057 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.057 09:33:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:24.318 [2024-11-15 09:33:12.733238] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:24.318 [2024-11-15 09:33:12.733332] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:24.318 [2024-11-15 09:33:12.733488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.886 "name": "raid_bdev1", 00:14:24.886 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:24.886 "strip_size_kb": 0, 00:14:24.886 "state": "online", 00:14:24.886 "raid_level": "raid1", 00:14:24.886 "superblock": true, 00:14:24.886 "num_base_bdevs": 4, 00:14:24.886 "num_base_bdevs_discovered": 3, 00:14:24.886 "num_base_bdevs_operational": 3, 00:14:24.886 "base_bdevs_list": [ 00:14:24.886 { 00:14:24.886 "name": "spare", 00:14:24.886 "uuid": 
"db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:24.886 "is_configured": true, 00:14:24.886 "data_offset": 2048, 00:14:24.886 "data_size": 63488 00:14:24.886 }, 00:14:24.886 { 00:14:24.886 "name": null, 00:14:24.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.886 "is_configured": false, 00:14:24.886 "data_offset": 0, 00:14:24.886 "data_size": 63488 00:14:24.886 }, 00:14:24.886 { 00:14:24.886 "name": "BaseBdev3", 00:14:24.886 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:24.886 "is_configured": true, 00:14:24.886 "data_offset": 2048, 00:14:24.886 "data_size": 63488 00:14:24.886 }, 00:14:24.886 { 00:14:24.886 "name": "BaseBdev4", 00:14:24.886 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:24.886 "is_configured": true, 00:14:24.886 "data_offset": 2048, 00:14:24.886 "data_size": 63488 00:14:24.886 } 00:14:24.886 ] 00:14:24.886 }' 00:14:24.886 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.145 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.146 09:33:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.146 "name": "raid_bdev1", 00:14:25.146 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:25.146 "strip_size_kb": 0, 00:14:25.146 "state": "online", 00:14:25.146 "raid_level": "raid1", 00:14:25.146 "superblock": true, 00:14:25.146 "num_base_bdevs": 4, 00:14:25.146 "num_base_bdevs_discovered": 3, 00:14:25.146 "num_base_bdevs_operational": 3, 00:14:25.146 "base_bdevs_list": [ 00:14:25.146 { 00:14:25.146 "name": "spare", 00:14:25.146 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:25.146 "is_configured": true, 00:14:25.146 "data_offset": 2048, 00:14:25.146 "data_size": 63488 00:14:25.146 }, 00:14:25.146 { 00:14:25.146 "name": null, 00:14:25.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.146 "is_configured": false, 00:14:25.146 "data_offset": 0, 00:14:25.146 "data_size": 63488 00:14:25.146 }, 00:14:25.146 { 00:14:25.146 "name": "BaseBdev3", 00:14:25.146 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:25.146 "is_configured": true, 00:14:25.146 "data_offset": 2048, 00:14:25.146 "data_size": 63488 00:14:25.146 }, 00:14:25.146 { 00:14:25.146 "name": "BaseBdev4", 00:14:25.146 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:25.146 "is_configured": true, 00:14:25.146 "data_offset": 2048, 00:14:25.146 "data_size": 63488 00:14:25.146 } 00:14:25.146 ] 00:14:25.146 }' 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.146 09:33:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.405 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.405 "name": "raid_bdev1", 00:14:25.405 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:25.405 "strip_size_kb": 0, 00:14:25.405 "state": "online", 00:14:25.405 "raid_level": "raid1", 00:14:25.405 "superblock": true, 00:14:25.405 "num_base_bdevs": 4, 00:14:25.405 "num_base_bdevs_discovered": 3, 00:14:25.405 "num_base_bdevs_operational": 3, 00:14:25.405 "base_bdevs_list": [ 00:14:25.405 { 00:14:25.405 "name": "spare", 00:14:25.405 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:25.405 "is_configured": true, 00:14:25.405 "data_offset": 2048, 00:14:25.405 "data_size": 63488 00:14:25.405 }, 00:14:25.405 { 00:14:25.405 "name": null, 00:14:25.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.405 "is_configured": false, 00:14:25.405 "data_offset": 0, 00:14:25.405 "data_size": 63488 00:14:25.405 }, 00:14:25.405 { 00:14:25.405 "name": "BaseBdev3", 00:14:25.405 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:25.405 "is_configured": true, 00:14:25.405 "data_offset": 2048, 00:14:25.405 "data_size": 63488 00:14:25.405 }, 00:14:25.405 { 00:14:25.405 "name": "BaseBdev4", 00:14:25.405 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:25.405 "is_configured": true, 00:14:25.405 "data_offset": 2048, 00:14:25.405 "data_size": 63488 00:14:25.405 } 00:14:25.405 ] 00:14:25.405 }' 00:14:25.405 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.405 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.665 
[2024-11-15 09:33:13.937794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.665 [2024-11-15 09:33:13.937832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.665 [2024-11-15 09:33:13.937963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.665 [2024-11-15 09:33:13.938060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.665 [2024-11-15 09:33:13.938135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:25.665 09:33:13 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.665 09:33:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:25.923 /dev/nbd0 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:14:25.923 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:14:25.924 1+0 records in 00:14:25.924 1+0 records out 00:14:25.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442576 s, 9.3 MB/s 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.924 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:26.182 /dev/nbd1 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 
00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.182 1+0 records in 00:14:26.182 1+0 records out 00:14:26.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431698 s, 9.5 MB/s 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:26.182 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:26.440 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:26.440 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:26.440 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:26.440 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:26.440 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local 
i 00:14:26.440 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.440 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.698 09:33:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.957 
09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.957 [2024-11-15 09:33:15.256388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.957 [2024-11-15 09:33:15.256453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.957 [2024-11-15 09:33:15.256478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:26.957 [2024-11-15 09:33:15.256489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.957 [2024-11-15 09:33:15.258823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.957 [2024-11-15 09:33:15.258873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.957 [2024-11-15 09:33:15.258946] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:26.957 [2024-11-15 09:33:15.259003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:14:26.957 [2024-11-15 09:33:15.259185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.957 [2024-11-15 09:33:15.259295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:26.957 spare 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.957 [2024-11-15 09:33:15.359210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:26.957 [2024-11-15 09:33:15.359253] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.957 [2024-11-15 09:33:15.359608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:26.957 [2024-11-15 09:33:15.359881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:26.957 [2024-11-15 09:33:15.359910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:26.957 [2024-11-15 09:33:15.360105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.957 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.958 "name": "raid_bdev1", 00:14:26.958 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:26.958 "strip_size_kb": 0, 00:14:26.958 "state": "online", 00:14:26.958 "raid_level": "raid1", 00:14:26.958 "superblock": true, 00:14:26.958 "num_base_bdevs": 4, 00:14:26.958 "num_base_bdevs_discovered": 3, 00:14:26.958 "num_base_bdevs_operational": 3, 00:14:26.958 "base_bdevs_list": [ 00:14:26.958 { 00:14:26.958 "name": "spare", 00:14:26.958 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:26.958 "is_configured": true, 00:14:26.958 "data_offset": 2048, 00:14:26.958 "data_size": 63488 00:14:26.958 }, 00:14:26.958 { 00:14:26.958 "name": null, 
00:14:26.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.958 "is_configured": false, 00:14:26.958 "data_offset": 2048, 00:14:26.958 "data_size": 63488 00:14:26.958 }, 00:14:26.958 { 00:14:26.958 "name": "BaseBdev3", 00:14:26.958 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:26.958 "is_configured": true, 00:14:26.958 "data_offset": 2048, 00:14:26.958 "data_size": 63488 00:14:26.958 }, 00:14:26.958 { 00:14:26.958 "name": "BaseBdev4", 00:14:26.958 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:26.958 "is_configured": true, 00:14:26.958 "data_offset": 2048, 00:14:26.958 "data_size": 63488 00:14:26.958 } 00:14:26.958 ] 00:14:26.958 }' 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.958 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.526 09:33:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.526 "name": "raid_bdev1", 00:14:27.526 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:27.526 "strip_size_kb": 0, 00:14:27.526 "state": "online", 00:14:27.526 "raid_level": "raid1", 00:14:27.526 "superblock": true, 00:14:27.526 "num_base_bdevs": 4, 00:14:27.526 "num_base_bdevs_discovered": 3, 00:14:27.526 "num_base_bdevs_operational": 3, 00:14:27.526 "base_bdevs_list": [ 00:14:27.526 { 00:14:27.526 "name": "spare", 00:14:27.526 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:27.526 "is_configured": true, 00:14:27.526 "data_offset": 2048, 00:14:27.526 "data_size": 63488 00:14:27.526 }, 00:14:27.526 { 00:14:27.526 "name": null, 00:14:27.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.526 "is_configured": false, 00:14:27.526 "data_offset": 2048, 00:14:27.526 "data_size": 63488 00:14:27.526 }, 00:14:27.526 { 00:14:27.526 "name": "BaseBdev3", 00:14:27.526 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:27.526 "is_configured": true, 00:14:27.526 "data_offset": 2048, 00:14:27.526 "data_size": 63488 00:14:27.526 }, 00:14:27.526 { 00:14:27.526 "name": "BaseBdev4", 00:14:27.526 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:27.526 "is_configured": true, 00:14:27.526 "data_offset": 2048, 00:14:27.526 "data_size": 63488 00:14:27.526 } 00:14:27.526 ] 00:14:27.526 }' 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.526 09:33:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.526 09:33:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:27.786 09:33:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.786 [2024-11-15 09:33:16.023197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.786 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.786 "name": "raid_bdev1", 00:14:27.786 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:27.786 "strip_size_kb": 0, 00:14:27.786 "state": "online", 00:14:27.786 "raid_level": "raid1", 00:14:27.786 "superblock": true, 00:14:27.786 "num_base_bdevs": 4, 00:14:27.786 "num_base_bdevs_discovered": 2, 00:14:27.786 "num_base_bdevs_operational": 2, 00:14:27.786 "base_bdevs_list": [ 00:14:27.786 { 00:14:27.786 "name": null, 00:14:27.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.786 "is_configured": false, 00:14:27.786 "data_offset": 0, 00:14:27.786 "data_size": 63488 00:14:27.786 }, 00:14:27.786 { 00:14:27.787 "name": null, 00:14:27.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.787 "is_configured": false, 00:14:27.787 "data_offset": 2048, 00:14:27.787 "data_size": 63488 00:14:27.787 }, 00:14:27.787 { 00:14:27.787 "name": "BaseBdev3", 00:14:27.787 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:27.787 "is_configured": true, 00:14:27.787 "data_offset": 2048, 00:14:27.787 "data_size": 63488 00:14:27.787 }, 00:14:27.787 { 00:14:27.787 "name": "BaseBdev4", 00:14:27.787 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:27.787 "is_configured": 
true, 00:14:27.787 "data_offset": 2048, 00:14:27.787 "data_size": 63488 00:14:27.787 } 00:14:27.787 ] 00:14:27.787 }' 00:14:27.787 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.787 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.046 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.046 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.046 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.046 [2024-11-15 09:33:16.470482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.046 [2024-11-15 09:33:16.470704] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:28.046 [2024-11-15 09:33:16.470719] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:28.046 [2024-11-15 09:33:16.470764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.046 [2024-11-15 09:33:16.486342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:28.046 09:33:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.046 09:33:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:28.046 [2024-11-15 09:33:16.488348] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.422 "name": "raid_bdev1", 00:14:29.422 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:29.422 "strip_size_kb": 0, 00:14:29.422 "state": "online", 00:14:29.422 "raid_level": "raid1", 
00:14:29.422 "superblock": true, 00:14:29.422 "num_base_bdevs": 4, 00:14:29.422 "num_base_bdevs_discovered": 3, 00:14:29.422 "num_base_bdevs_operational": 3, 00:14:29.422 "process": { 00:14:29.422 "type": "rebuild", 00:14:29.422 "target": "spare", 00:14:29.422 "progress": { 00:14:29.422 "blocks": 20480, 00:14:29.422 "percent": 32 00:14:29.422 } 00:14:29.422 }, 00:14:29.422 "base_bdevs_list": [ 00:14:29.422 { 00:14:29.422 "name": "spare", 00:14:29.422 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:29.422 "is_configured": true, 00:14:29.422 "data_offset": 2048, 00:14:29.422 "data_size": 63488 00:14:29.422 }, 00:14:29.422 { 00:14:29.422 "name": null, 00:14:29.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.422 "is_configured": false, 00:14:29.422 "data_offset": 2048, 00:14:29.422 "data_size": 63488 00:14:29.422 }, 00:14:29.422 { 00:14:29.422 "name": "BaseBdev3", 00:14:29.422 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:29.422 "is_configured": true, 00:14:29.422 "data_offset": 2048, 00:14:29.422 "data_size": 63488 00:14:29.422 }, 00:14:29.422 { 00:14:29.422 "name": "BaseBdev4", 00:14:29.422 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:29.422 "is_configured": true, 00:14:29.422 "data_offset": 2048, 00:14:29.422 "data_size": 63488 00:14:29.422 } 00:14:29.422 ] 00:14:29.422 }' 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.422 [2024-11-15 09:33:17.632080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.422 [2024-11-15 09:33:17.694207] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.422 [2024-11-15 09:33:17.694281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.422 [2024-11-15 09:33:17.694318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.422 [2024-11-15 09:33:17.694326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.422 "name": "raid_bdev1", 00:14:29.422 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:29.422 "strip_size_kb": 0, 00:14:29.422 "state": "online", 00:14:29.422 "raid_level": "raid1", 00:14:29.422 "superblock": true, 00:14:29.422 "num_base_bdevs": 4, 00:14:29.422 "num_base_bdevs_discovered": 2, 00:14:29.422 "num_base_bdevs_operational": 2, 00:14:29.422 "base_bdevs_list": [ 00:14:29.422 { 00:14:29.422 "name": null, 00:14:29.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.422 "is_configured": false, 00:14:29.422 "data_offset": 0, 00:14:29.422 "data_size": 63488 00:14:29.422 }, 00:14:29.422 { 00:14:29.422 "name": null, 00:14:29.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.422 "is_configured": false, 00:14:29.422 "data_offset": 2048, 00:14:29.422 "data_size": 63488 00:14:29.422 }, 00:14:29.422 { 00:14:29.422 "name": "BaseBdev3", 00:14:29.422 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:29.422 "is_configured": true, 00:14:29.422 "data_offset": 2048, 00:14:29.422 "data_size": 63488 00:14:29.422 }, 00:14:29.422 { 00:14:29.422 "name": "BaseBdev4", 00:14:29.422 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:29.422 "is_configured": true, 00:14:29.422 "data_offset": 2048, 00:14:29.422 "data_size": 63488 00:14:29.422 } 00:14:29.422 ] 00:14:29.422 }' 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:29.422 09:33:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.991 09:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:29.991 09:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.991 09:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.991 [2024-11-15 09:33:18.189633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:29.991 [2024-11-15 09:33:18.189727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.991 [2024-11-15 09:33:18.189767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:29.991 [2024-11-15 09:33:18.189782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.991 [2024-11-15 09:33:18.190350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.991 [2024-11-15 09:33:18.190382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:29.991 [2024-11-15 09:33:18.190497] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:29.991 [2024-11-15 09:33:18.190515] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:29.991 [2024-11-15 09:33:18.190533] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:29.991 [2024-11-15 09:33:18.190561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.991 [2024-11-15 09:33:18.205867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:29.992 spare 00:14:29.992 09:33:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.992 09:33:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:29.992 [2024-11-15 09:33:18.207749] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.926 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.926 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.926 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.926 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.926 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.927 "name": "raid_bdev1", 00:14:30.927 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:30.927 "strip_size_kb": 0, 00:14:30.927 "state": "online", 00:14:30.927 
"raid_level": "raid1", 00:14:30.927 "superblock": true, 00:14:30.927 "num_base_bdevs": 4, 00:14:30.927 "num_base_bdevs_discovered": 3, 00:14:30.927 "num_base_bdevs_operational": 3, 00:14:30.927 "process": { 00:14:30.927 "type": "rebuild", 00:14:30.927 "target": "spare", 00:14:30.927 "progress": { 00:14:30.927 "blocks": 20480, 00:14:30.927 "percent": 32 00:14:30.927 } 00:14:30.927 }, 00:14:30.927 "base_bdevs_list": [ 00:14:30.927 { 00:14:30.927 "name": "spare", 00:14:30.927 "uuid": "db30f54b-e8b4-5cdd-86a9-fb7eea019a2e", 00:14:30.927 "is_configured": true, 00:14:30.927 "data_offset": 2048, 00:14:30.927 "data_size": 63488 00:14:30.927 }, 00:14:30.927 { 00:14:30.927 "name": null, 00:14:30.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.927 "is_configured": false, 00:14:30.927 "data_offset": 2048, 00:14:30.927 "data_size": 63488 00:14:30.927 }, 00:14:30.927 { 00:14:30.927 "name": "BaseBdev3", 00:14:30.927 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:30.927 "is_configured": true, 00:14:30.927 "data_offset": 2048, 00:14:30.927 "data_size": 63488 00:14:30.927 }, 00:14:30.927 { 00:14:30.927 "name": "BaseBdev4", 00:14:30.927 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:30.927 "is_configured": true, 00:14:30.927 "data_offset": 2048, 00:14:30.927 "data_size": 63488 00:14:30.927 } 00:14:30.927 ] 00:14:30.927 }' 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.927 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.927 [2024-11-15 09:33:19.363211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.186 [2024-11-15 09:33:19.413422] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:31.186 [2024-11-15 09:33:19.413488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.186 [2024-11-15 09:33:19.413503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:31.186 [2024-11-15 09:33:19.413511] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.186 
09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.186 "name": "raid_bdev1", 00:14:31.186 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:31.186 "strip_size_kb": 0, 00:14:31.186 "state": "online", 00:14:31.186 "raid_level": "raid1", 00:14:31.186 "superblock": true, 00:14:31.186 "num_base_bdevs": 4, 00:14:31.186 "num_base_bdevs_discovered": 2, 00:14:31.186 "num_base_bdevs_operational": 2, 00:14:31.186 "base_bdevs_list": [ 00:14:31.186 { 00:14:31.186 "name": null, 00:14:31.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.186 "is_configured": false, 00:14:31.186 "data_offset": 0, 00:14:31.186 "data_size": 63488 00:14:31.186 }, 00:14:31.186 { 00:14:31.186 "name": null, 00:14:31.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.186 "is_configured": false, 00:14:31.186 "data_offset": 2048, 00:14:31.186 "data_size": 63488 00:14:31.186 }, 00:14:31.186 { 00:14:31.186 "name": "BaseBdev3", 00:14:31.186 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:31.186 "is_configured": true, 00:14:31.186 "data_offset": 2048, 00:14:31.186 "data_size": 63488 00:14:31.186 }, 00:14:31.186 { 00:14:31.186 "name": "BaseBdev4", 00:14:31.186 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:31.186 "is_configured": true, 00:14:31.186 "data_offset": 2048, 00:14:31.186 "data_size": 63488 00:14:31.186 } 00:14:31.186 ] 00:14:31.186 }' 00:14:31.186 09:33:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.186 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.523 09:33:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.790 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.790 "name": "raid_bdev1", 00:14:31.790 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:31.790 "strip_size_kb": 0, 00:14:31.790 "state": "online", 00:14:31.790 "raid_level": "raid1", 00:14:31.790 "superblock": true, 00:14:31.790 "num_base_bdevs": 4, 00:14:31.790 "num_base_bdevs_discovered": 2, 00:14:31.790 "num_base_bdevs_operational": 2, 00:14:31.790 "base_bdevs_list": [ 00:14:31.790 { 00:14:31.790 "name": null, 00:14:31.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.790 "is_configured": false, 00:14:31.790 "data_offset": 0, 00:14:31.790 "data_size": 63488 00:14:31.790 }, 00:14:31.790 
{ 00:14:31.790 "name": null, 00:14:31.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.790 "is_configured": false, 00:14:31.790 "data_offset": 2048, 00:14:31.790 "data_size": 63488 00:14:31.790 }, 00:14:31.790 { 00:14:31.790 "name": "BaseBdev3", 00:14:31.790 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:31.790 "is_configured": true, 00:14:31.790 "data_offset": 2048, 00:14:31.790 "data_size": 63488 00:14:31.790 }, 00:14:31.790 { 00:14:31.790 "name": "BaseBdev4", 00:14:31.790 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:31.790 "is_configured": true, 00:14:31.790 "data_offset": 2048, 00:14:31.790 "data_size": 63488 00:14:31.790 } 00:14:31.790 ] 00:14:31.790 }' 00:14:31.790 09:33:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.790 [2024-11-15 09:33:20.086693] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:31.790 [2024-11-15 09:33:20.086751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.790 [2024-11-15 09:33:20.086772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:31.790 [2024-11-15 09:33:20.086783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.790 [2024-11-15 09:33:20.087243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.790 [2024-11-15 09:33:20.087269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:31.790 [2024-11-15 09:33:20.087348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:31.790 [2024-11-15 09:33:20.087368] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:31.790 [2024-11-15 09:33:20.087376] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:31.790 [2024-11-15 09:33:20.087401] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:31.790 BaseBdev1 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.790 09:33:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.724 09:33:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.724 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.724 "name": "raid_bdev1", 00:14:32.724 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:32.724 "strip_size_kb": 0, 00:14:32.724 "state": "online", 00:14:32.724 "raid_level": "raid1", 00:14:32.724 "superblock": true, 00:14:32.724 "num_base_bdevs": 4, 00:14:32.724 "num_base_bdevs_discovered": 2, 00:14:32.724 "num_base_bdevs_operational": 2, 00:14:32.724 "base_bdevs_list": [ 00:14:32.724 { 00:14:32.724 "name": null, 00:14:32.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.724 "is_configured": false, 00:14:32.724 "data_offset": 0, 00:14:32.724 "data_size": 63488 00:14:32.724 }, 00:14:32.724 { 00:14:32.725 "name": null, 00:14:32.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.725 
"is_configured": false, 00:14:32.725 "data_offset": 2048, 00:14:32.725 "data_size": 63488 00:14:32.725 }, 00:14:32.725 { 00:14:32.725 "name": "BaseBdev3", 00:14:32.725 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:32.725 "is_configured": true, 00:14:32.725 "data_offset": 2048, 00:14:32.725 "data_size": 63488 00:14:32.725 }, 00:14:32.725 { 00:14:32.725 "name": "BaseBdev4", 00:14:32.725 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:32.725 "is_configured": true, 00:14:32.725 "data_offset": 2048, 00:14:32.725 "data_size": 63488 00:14:32.725 } 00:14:32.725 ] 00:14:32.725 }' 00:14:32.725 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.725 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.291 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:33.291 "name": "raid_bdev1", 00:14:33.291 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:33.291 "strip_size_kb": 0, 00:14:33.291 "state": "online", 00:14:33.291 "raid_level": "raid1", 00:14:33.291 "superblock": true, 00:14:33.291 "num_base_bdevs": 4, 00:14:33.291 "num_base_bdevs_discovered": 2, 00:14:33.291 "num_base_bdevs_operational": 2, 00:14:33.291 "base_bdevs_list": [ 00:14:33.291 { 00:14:33.291 "name": null, 00:14:33.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.291 "is_configured": false, 00:14:33.291 "data_offset": 0, 00:14:33.291 "data_size": 63488 00:14:33.291 }, 00:14:33.291 { 00:14:33.291 "name": null, 00:14:33.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.292 "is_configured": false, 00:14:33.292 "data_offset": 2048, 00:14:33.292 "data_size": 63488 00:14:33.292 }, 00:14:33.292 { 00:14:33.292 "name": "BaseBdev3", 00:14:33.292 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:33.292 "is_configured": true, 00:14:33.292 "data_offset": 2048, 00:14:33.292 "data_size": 63488 00:14:33.292 }, 00:14:33.292 { 00:14:33.292 "name": "BaseBdev4", 00:14:33.292 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:33.292 "is_configured": true, 00:14:33.292 "data_offset": 2048, 00:14:33.292 "data_size": 63488 00:14:33.292 } 00:14:33.292 ] 00:14:33.292 }' 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.292 [2024-11-15 09:33:21.708077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.292 [2024-11-15 09:33:21.708275] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:33.292 [2024-11-15 09:33:21.708290] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:33.292 request: 00:14:33.292 { 00:14:33.292 "base_bdev": "BaseBdev1", 00:14:33.292 "raid_bdev": "raid_bdev1", 00:14:33.292 "method": "bdev_raid_add_base_bdev", 00:14:33.292 "req_id": 1 00:14:33.292 } 00:14:33.292 Got JSON-RPC error response 00:14:33.292 response: 00:14:33.292 { 00:14:33.292 "code": -22, 00:14:33.292 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:33.292 } 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:33.292 09:33:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.669 "name": "raid_bdev1", 00:14:34.669 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:34.669 "strip_size_kb": 0, 00:14:34.669 "state": "online", 00:14:34.669 "raid_level": "raid1", 00:14:34.669 "superblock": true, 00:14:34.669 "num_base_bdevs": 4, 00:14:34.669 "num_base_bdevs_discovered": 2, 00:14:34.669 "num_base_bdevs_operational": 2, 00:14:34.669 "base_bdevs_list": [ 00:14:34.669 { 00:14:34.669 "name": null, 00:14:34.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.669 "is_configured": false, 00:14:34.669 "data_offset": 0, 00:14:34.669 "data_size": 63488 00:14:34.669 }, 00:14:34.669 { 00:14:34.669 "name": null, 00:14:34.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.669 "is_configured": false, 00:14:34.669 "data_offset": 2048, 00:14:34.669 "data_size": 63488 00:14:34.669 }, 00:14:34.669 { 00:14:34.669 "name": "BaseBdev3", 00:14:34.669 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:34.669 "is_configured": true, 00:14:34.669 "data_offset": 2048, 00:14:34.669 "data_size": 63488 00:14:34.669 }, 00:14:34.669 { 00:14:34.669 "name": "BaseBdev4", 00:14:34.669 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:34.669 "is_configured": true, 00:14:34.669 "data_offset": 2048, 00:14:34.669 "data_size": 63488 00:14:34.669 } 00:14:34.669 ] 00:14:34.669 }' 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.669 09:33:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.928 09:33:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.928 "name": "raid_bdev1", 00:14:34.928 "uuid": "e396b484-905f-4072-b735-f11a5f4bf015", 00:14:34.928 "strip_size_kb": 0, 00:14:34.928 "state": "online", 00:14:34.928 "raid_level": "raid1", 00:14:34.928 "superblock": true, 00:14:34.928 "num_base_bdevs": 4, 00:14:34.928 "num_base_bdevs_discovered": 2, 00:14:34.928 "num_base_bdevs_operational": 2, 00:14:34.928 "base_bdevs_list": [ 00:14:34.928 { 00:14:34.928 "name": null, 00:14:34.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.928 "is_configured": false, 00:14:34.928 "data_offset": 0, 00:14:34.928 "data_size": 63488 00:14:34.928 }, 00:14:34.928 { 00:14:34.928 "name": null, 00:14:34.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.928 "is_configured": false, 00:14:34.928 "data_offset": 2048, 00:14:34.928 "data_size": 63488 00:14:34.928 }, 00:14:34.928 { 00:14:34.928 "name": "BaseBdev3", 00:14:34.928 "uuid": "72cc031a-cc09-5331-9048-b6c711c1d3f5", 00:14:34.928 "is_configured": true, 00:14:34.928 "data_offset": 2048, 00:14:34.928 "data_size": 63488 00:14:34.928 }, 
00:14:34.928 { 00:14:34.928 "name": "BaseBdev4", 00:14:34.928 "uuid": "0e3f4f15-eabe-5bef-9e16-f8a4a570e0bd", 00:14:34.928 "is_configured": true, 00:14:34.928 "data_offset": 2048, 00:14:34.928 "data_size": 63488 00:14:34.928 } 00:14:34.928 ] 00:14:34.928 }' 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78391 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78391 ']' 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78391 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78391 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.928 killing process with pid 78391 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78391' 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78391 00:14:34.928 Received shutdown signal, test time was about 60.000000 seconds 00:14:34.928 00:14:34.928 Latency(us) 00:14:34.928 
[2024-11-15T09:33:23.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.928 [2024-11-15T09:33:23.391Z] =================================================================================================================== 00:14:34.928 [2024-11-15T09:33:23.391Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:34.928 [2024-11-15 09:33:23.363886] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.928 [2024-11-15 09:33:23.364061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.928 09:33:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78391 00:14:34.928 [2024-11-15 09:33:23.364152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.928 [2024-11-15 09:33:23.364164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:35.549 [2024-11-15 09:33:23.871953] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:36.947 00:14:36.947 real 0m26.218s 00:14:36.947 user 0m31.377s 00:14:36.947 sys 0m4.298s 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.947 ************************************ 00:14:36.947 END TEST raid_rebuild_test_sb 00:14:36.947 ************************************ 00:14:36.947 09:33:25 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:36.947 09:33:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:36.947 09:33:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.947 09:33:25 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:14:36.947 ************************************ 00:14:36.947 START TEST raid_rebuild_test_io 00:14:36.947 ************************************ 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:36.947 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79156 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79156 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 79156 ']' 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:14:36.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:36.948 09:33:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.948 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:36.948 Zero copy mechanism will not be used. 00:14:36.948 [2024-11-15 09:33:25.198223] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:14:36.948 [2024-11-15 09:33:25.198435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79156 ] 00:14:36.948 [2024-11-15 09:33:25.392764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.207 [2024-11-15 09:33:25.516089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.467 [2024-11-15 09:33:25.735433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.467 [2024-11-15 09:33:25.735506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.726 BaseBdev1_malloc 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.726 [2024-11-15 09:33:26.062241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:37.726 [2024-11-15 09:33:26.062310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.726 [2024-11-15 09:33:26.062334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:37.726 [2024-11-15 09:33:26.062346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.726 [2024-11-15 09:33:26.064545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.726 [2024-11-15 09:33:26.064590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:37.726 BaseBdev1 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.726 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:37.727 BaseBdev2_malloc 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.727 [2024-11-15 09:33:26.117711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:37.727 [2024-11-15 09:33:26.117778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.727 [2024-11-15 09:33:26.117798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:37.727 [2024-11-15 09:33:26.117812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.727 [2024-11-15 09:33:26.120145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.727 [2024-11-15 09:33:26.120196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:37.727 BaseBdev2 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.727 BaseBdev3_malloc 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.727 [2024-11-15 09:33:26.184409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:37.727 [2024-11-15 09:33:26.184473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.727 [2024-11-15 09:33:26.184515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:37.727 [2024-11-15 09:33:26.184528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.727 [2024-11-15 09:33:26.186748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.727 [2024-11-15 09:33:26.186802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:37.727 BaseBdev3 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.727 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.986 BaseBdev4_malloc 00:14:37.986 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.986 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:37.986 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:37.986 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.986 [2024-11-15 09:33:26.240079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:37.986 [2024-11-15 09:33:26.240137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.986 [2024-11-15 09:33:26.240176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:37.986 [2024-11-15 09:33:26.240189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.986 [2024-11-15 09:33:26.242399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.986 [2024-11-15 09:33:26.242442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:37.986 BaseBdev4 00:14:37.986 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.987 spare_malloc 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.987 spare_delay 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.987 [2024-11-15 09:33:26.309054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.987 [2024-11-15 09:33:26.309123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.987 [2024-11-15 09:33:26.309163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:37.987 [2024-11-15 09:33:26.309174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.987 [2024-11-15 09:33:26.311269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.987 [2024-11-15 09:33:26.311310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.987 spare 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.987 [2024-11-15 09:33:26.321099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.987 [2024-11-15 09:33:26.323058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.987 [2024-11-15 09:33:26.323135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.987 [2024-11-15 09:33:26.323195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:37.987 [2024-11-15 09:33:26.323283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.987 [2024-11-15 09:33:26.323298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:37.987 [2024-11-15 09:33:26.323584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.987 [2024-11-15 09:33:26.323791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.987 [2024-11-15 09:33:26.323812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.987 [2024-11-15 09:33:26.324042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.987 "name": "raid_bdev1", 00:14:37.987 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:37.987 "strip_size_kb": 0, 00:14:37.987 "state": "online", 00:14:37.987 "raid_level": "raid1", 00:14:37.987 "superblock": false, 00:14:37.987 "num_base_bdevs": 4, 00:14:37.987 "num_base_bdevs_discovered": 4, 00:14:37.987 "num_base_bdevs_operational": 4, 00:14:37.987 "base_bdevs_list": [ 00:14:37.987 { 00:14:37.987 "name": "BaseBdev1", 00:14:37.987 "uuid": "17d99896-f73b-5b4a-81d9-36f6903e4806", 00:14:37.987 "is_configured": true, 00:14:37.987 "data_offset": 0, 00:14:37.987 "data_size": 65536 00:14:37.987 }, 00:14:37.987 { 00:14:37.987 "name": "BaseBdev2", 00:14:37.987 "uuid": "18d62af3-bad0-5b11-9975-ac7517c79078", 00:14:37.987 "is_configured": true, 00:14:37.987 "data_offset": 0, 00:14:37.987 "data_size": 65536 00:14:37.987 }, 00:14:37.987 { 00:14:37.987 "name": "BaseBdev3", 00:14:37.987 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:37.987 "is_configured": true, 00:14:37.987 "data_offset": 0, 00:14:37.987 "data_size": 65536 00:14:37.987 }, 00:14:37.987 { 00:14:37.987 "name": "BaseBdev4", 00:14:37.987 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:37.987 "is_configured": true, 00:14:37.987 "data_offset": 0, 00:14:37.987 "data_size": 65536 00:14:37.987 } 00:14:37.987 ] 00:14:37.987 }' 00:14:37.987 
09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.987 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:38.556 [2024-11-15 09:33:26.772795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:38.556 [2024-11-15 09:33:26.864199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.556 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.557 09:33:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.557 "name": "raid_bdev1", 00:14:38.557 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:38.557 "strip_size_kb": 0, 00:14:38.557 "state": "online", 00:14:38.557 "raid_level": "raid1", 00:14:38.557 "superblock": false, 00:14:38.557 "num_base_bdevs": 4, 00:14:38.557 "num_base_bdevs_discovered": 3, 00:14:38.557 "num_base_bdevs_operational": 3, 00:14:38.557 "base_bdevs_list": [ 00:14:38.557 { 00:14:38.557 "name": null, 00:14:38.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.557 "is_configured": false, 00:14:38.557 "data_offset": 0, 00:14:38.557 "data_size": 65536 00:14:38.557 }, 00:14:38.557 { 00:14:38.557 "name": "BaseBdev2", 00:14:38.557 "uuid": "18d62af3-bad0-5b11-9975-ac7517c79078", 00:14:38.557 "is_configured": true, 00:14:38.557 "data_offset": 0, 00:14:38.557 "data_size": 65536 00:14:38.557 }, 00:14:38.557 { 00:14:38.557 "name": "BaseBdev3", 00:14:38.557 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:38.557 "is_configured": true, 00:14:38.557 "data_offset": 0, 00:14:38.557 "data_size": 65536 00:14:38.557 }, 00:14:38.557 { 00:14:38.557 "name": "BaseBdev4", 00:14:38.557 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:38.557 "is_configured": true, 00:14:38.557 "data_offset": 0, 00:14:38.557 "data_size": 65536 00:14:38.557 } 00:14:38.557 ] 00:14:38.557 }' 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.557 09:33:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.557 [2024-11-15 09:33:26.972603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:38.557 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:38.557 Zero copy mechanism will not be used. 00:14:38.557 Running I/O for 60 seconds... 
00:14:39.125 09:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.125 09:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.125 09:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.125 [2024-11-15 09:33:27.323212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.125 09:33:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.125 09:33:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:39.125 [2024-11-15 09:33:27.377221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:39.125 [2024-11-15 09:33:27.379441] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.125 [2024-11-15 09:33:27.481200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:39.126 [2024-11-15 09:33:27.482003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:39.398 [2024-11-15 09:33:27.615980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:39.398 [2024-11-15 09:33:27.616333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:39.669 [2024-11-15 09:33:27.956508] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:39.669 [2024-11-15 09:33:27.957188] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:39.669 158.00 IOPS, 474.00 MiB/s [2024-11-15T09:33:28.132Z] [2024-11-15 09:33:28.090840] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.928 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.187 "name": "raid_bdev1", 00:14:40.187 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:40.187 "strip_size_kb": 0, 00:14:40.187 "state": "online", 00:14:40.187 "raid_level": "raid1", 00:14:40.187 "superblock": false, 00:14:40.187 "num_base_bdevs": 4, 00:14:40.187 "num_base_bdevs_discovered": 4, 00:14:40.187 "num_base_bdevs_operational": 4, 00:14:40.187 "process": { 00:14:40.187 "type": "rebuild", 00:14:40.187 "target": "spare", 00:14:40.187 "progress": { 00:14:40.187 "blocks": 12288, 00:14:40.187 "percent": 18 00:14:40.187 } 00:14:40.187 }, 00:14:40.187 "base_bdevs_list": [ 00:14:40.187 { 00:14:40.187 "name": "spare", 00:14:40.187 "uuid": 
"eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 0, 00:14:40.187 "data_size": 65536 00:14:40.187 }, 00:14:40.187 { 00:14:40.187 "name": "BaseBdev2", 00:14:40.187 "uuid": "18d62af3-bad0-5b11-9975-ac7517c79078", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 0, 00:14:40.187 "data_size": 65536 00:14:40.187 }, 00:14:40.187 { 00:14:40.187 "name": "BaseBdev3", 00:14:40.187 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 0, 00:14:40.187 "data_size": 65536 00:14:40.187 }, 00:14:40.187 { 00:14:40.187 "name": "BaseBdev4", 00:14:40.187 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:40.187 "is_configured": true, 00:14:40.187 "data_offset": 0, 00:14:40.187 "data_size": 65536 00:14:40.187 } 00:14:40.187 ] 00:14:40.187 }' 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.187 [2024-11-15 09:33:28.429358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.187 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.187 [2024-11-15 09:33:28.515359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.187 [2024-11-15 09:33:28.638067] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.187 [2024-11-15 09:33:28.649170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.187 [2024-11-15 09:33:28.649313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.187 [2024-11-15 09:33:28.649349] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.446 [2024-11-15 09:33:28.687943] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.446 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.446 "name": "raid_bdev1", 00:14:40.446 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:40.446 "strip_size_kb": 0, 00:14:40.446 "state": "online", 00:14:40.446 "raid_level": "raid1", 00:14:40.446 "superblock": false, 00:14:40.446 "num_base_bdevs": 4, 00:14:40.446 "num_base_bdevs_discovered": 3, 00:14:40.446 "num_base_bdevs_operational": 3, 00:14:40.446 "base_bdevs_list": [ 00:14:40.446 { 00:14:40.446 "name": null, 00:14:40.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.446 "is_configured": false, 00:14:40.446 "data_offset": 0, 00:14:40.446 "data_size": 65536 00:14:40.446 }, 00:14:40.446 { 00:14:40.446 "name": "BaseBdev2", 00:14:40.446 "uuid": "18d62af3-bad0-5b11-9975-ac7517c79078", 00:14:40.446 "is_configured": true, 00:14:40.446 "data_offset": 0, 00:14:40.446 "data_size": 65536 00:14:40.446 }, 00:14:40.446 { 00:14:40.446 "name": "BaseBdev3", 00:14:40.446 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:40.446 "is_configured": true, 00:14:40.446 "data_offset": 0, 00:14:40.446 "data_size": 65536 00:14:40.446 }, 00:14:40.446 { 00:14:40.446 "name": "BaseBdev4", 00:14:40.446 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:40.446 "is_configured": true, 00:14:40.446 "data_offset": 0, 00:14:40.446 "data_size": 65536 00:14:40.446 } 00:14:40.446 ] 00:14:40.446 }' 00:14:40.447 09:33:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.447 09:33:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.706 133.00 IOPS, 399.00 MiB/s 
[2024-11-15T09:33:29.169Z] 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.706 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.706 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.706 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.706 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.706 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.965 "name": "raid_bdev1", 00:14:40.965 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:40.965 "strip_size_kb": 0, 00:14:40.965 "state": "online", 00:14:40.965 "raid_level": "raid1", 00:14:40.965 "superblock": false, 00:14:40.965 "num_base_bdevs": 4, 00:14:40.965 "num_base_bdevs_discovered": 3, 00:14:40.965 "num_base_bdevs_operational": 3, 00:14:40.965 "base_bdevs_list": [ 00:14:40.965 { 00:14:40.965 "name": null, 00:14:40.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.965 "is_configured": false, 00:14:40.965 "data_offset": 0, 00:14:40.965 "data_size": 65536 00:14:40.965 }, 00:14:40.965 { 00:14:40.965 "name": "BaseBdev2", 00:14:40.965 "uuid": "18d62af3-bad0-5b11-9975-ac7517c79078", 00:14:40.965 "is_configured": true, 00:14:40.965 
"data_offset": 0, 00:14:40.965 "data_size": 65536 00:14:40.965 }, 00:14:40.965 { 00:14:40.965 "name": "BaseBdev3", 00:14:40.965 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:40.965 "is_configured": true, 00:14:40.965 "data_offset": 0, 00:14:40.965 "data_size": 65536 00:14:40.965 }, 00:14:40.965 { 00:14:40.965 "name": "BaseBdev4", 00:14:40.965 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:40.965 "is_configured": true, 00:14:40.965 "data_offset": 0, 00:14:40.965 "data_size": 65536 00:14:40.965 } 00:14:40.965 ] 00:14:40.965 }' 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.965 [2024-11-15 09:33:29.314412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.965 09:33:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:40.965 [2024-11-15 09:33:29.372731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:40.965 [2024-11-15 09:33:29.374858] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.225 [2024-11-15 09:33:29.489618] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:41.225 [2024-11-15 09:33:29.491153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:41.484 [2024-11-15 09:33:29.699272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:41.484 [2024-11-15 09:33:29.699634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:41.743 136.33 IOPS, 409.00 MiB/s [2024-11-15T09:33:30.206Z] [2024-11-15 09:33:30.036340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:41.743 [2024-11-15 09:33:30.037865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:42.002 [2024-11-15 09:33:30.256900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:42.002 [2024-11-15 09:33:30.257368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.002 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.003 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.003 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.003 "name": "raid_bdev1", 00:14:42.003 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:42.003 "strip_size_kb": 0, 00:14:42.003 "state": "online", 00:14:42.003 "raid_level": "raid1", 00:14:42.003 "superblock": false, 00:14:42.003 "num_base_bdevs": 4, 00:14:42.003 "num_base_bdevs_discovered": 4, 00:14:42.003 "num_base_bdevs_operational": 4, 00:14:42.003 "process": { 00:14:42.003 "type": "rebuild", 00:14:42.003 "target": "spare", 00:14:42.003 "progress": { 00:14:42.003 "blocks": 10240, 00:14:42.003 "percent": 15 00:14:42.003 } 00:14:42.003 }, 00:14:42.003 "base_bdevs_list": [ 00:14:42.003 { 00:14:42.003 "name": "spare", 00:14:42.003 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:42.003 "is_configured": true, 00:14:42.003 "data_offset": 0, 00:14:42.003 "data_size": 65536 00:14:42.003 }, 00:14:42.003 { 00:14:42.003 "name": "BaseBdev2", 00:14:42.003 "uuid": "18d62af3-bad0-5b11-9975-ac7517c79078", 00:14:42.003 "is_configured": true, 00:14:42.003 "data_offset": 0, 00:14:42.003 "data_size": 65536 00:14:42.003 }, 00:14:42.003 { 00:14:42.003 "name": "BaseBdev3", 00:14:42.003 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:42.003 "is_configured": true, 00:14:42.003 "data_offset": 0, 00:14:42.003 "data_size": 65536 00:14:42.003 }, 00:14:42.003 { 00:14:42.003 "name": "BaseBdev4", 00:14:42.003 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:42.003 "is_configured": true, 00:14:42.003 "data_offset": 0, 00:14:42.003 "data_size": 65536 00:14:42.003 } 00:14:42.003 ] 00:14:42.003 }' 
00:14:42.003 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.003 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.003 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:42.262 [2024-11-15 09:33:30.507535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.262 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.262 [2024-11-15 09:33:30.519467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.262 [2024-11-15 09:33:30.636464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:42.262 [2024-11-15 09:33:30.636824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:42.523 [2024-11-15 09:33:30.740155] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:42.523 [2024-11-15 09:33:30.740213] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.523 "name": "raid_bdev1", 00:14:42.523 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:42.523 "strip_size_kb": 0, 00:14:42.523 "state": "online", 00:14:42.523 "raid_level": "raid1", 00:14:42.523 "superblock": false, 00:14:42.523 "num_base_bdevs": 4, 00:14:42.523 "num_base_bdevs_discovered": 3, 00:14:42.523 "num_base_bdevs_operational": 3, 00:14:42.523 
"process": { 00:14:42.523 "type": "rebuild", 00:14:42.523 "target": "spare", 00:14:42.523 "progress": { 00:14:42.523 "blocks": 16384, 00:14:42.523 "percent": 25 00:14:42.523 } 00:14:42.523 }, 00:14:42.523 "base_bdevs_list": [ 00:14:42.523 { 00:14:42.523 "name": "spare", 00:14:42.523 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:42.523 "is_configured": true, 00:14:42.523 "data_offset": 0, 00:14:42.523 "data_size": 65536 00:14:42.523 }, 00:14:42.523 { 00:14:42.523 "name": null, 00:14:42.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.523 "is_configured": false, 00:14:42.523 "data_offset": 0, 00:14:42.523 "data_size": 65536 00:14:42.523 }, 00:14:42.523 { 00:14:42.523 "name": "BaseBdev3", 00:14:42.523 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:42.523 "is_configured": true, 00:14:42.523 "data_offset": 0, 00:14:42.523 "data_size": 65536 00:14:42.523 }, 00:14:42.523 { 00:14:42.523 "name": "BaseBdev4", 00:14:42.523 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:42.523 "is_configured": true, 00:14:42.523 "data_offset": 0, 00:14:42.523 "data_size": 65536 00:14:42.523 } 00:14:42.523 ] 00:14:42.523 }' 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.523 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=504 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.524 "name": "raid_bdev1", 00:14:42.524 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:42.524 "strip_size_kb": 0, 00:14:42.524 "state": "online", 00:14:42.524 "raid_level": "raid1", 00:14:42.524 "superblock": false, 00:14:42.524 "num_base_bdevs": 4, 00:14:42.524 "num_base_bdevs_discovered": 3, 00:14:42.524 "num_base_bdevs_operational": 3, 00:14:42.524 "process": { 00:14:42.524 "type": "rebuild", 00:14:42.524 "target": "spare", 00:14:42.524 "progress": { 00:14:42.524 "blocks": 18432, 00:14:42.524 "percent": 28 00:14:42.524 } 00:14:42.524 }, 00:14:42.524 "base_bdevs_list": [ 00:14:42.524 { 00:14:42.524 "name": "spare", 00:14:42.524 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:42.524 "is_configured": true, 00:14:42.524 "data_offset": 0, 00:14:42.524 "data_size": 65536 00:14:42.524 }, 00:14:42.524 { 00:14:42.524 "name": null, 00:14:42.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.524 "is_configured": false, 00:14:42.524 
"data_offset": 0, 00:14:42.524 "data_size": 65536 00:14:42.524 }, 00:14:42.524 { 00:14:42.524 "name": "BaseBdev3", 00:14:42.524 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:42.524 "is_configured": true, 00:14:42.524 "data_offset": 0, 00:14:42.524 "data_size": 65536 00:14:42.524 }, 00:14:42.524 { 00:14:42.524 "name": "BaseBdev4", 00:14:42.524 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:42.524 "is_configured": true, 00:14:42.524 "data_offset": 0, 00:14:42.524 "data_size": 65536 00:14:42.524 } 00:14:42.524 ] 00:14:42.524 }' 00:14:42.524 09:33:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.783 125.50 IOPS, 376.50 MiB/s [2024-11-15T09:33:31.246Z] 09:33:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.783 09:33:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.783 09:33:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.783 09:33:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:43.061 [2024-11-15 09:33:31.423451] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:43.327 [2024-11-15 09:33:31.673194] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:43.586 111.00 IOPS, 333.00 MiB/s [2024-11-15T09:33:32.049Z] 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.586 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.586 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.586 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:43.586 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.586 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.845 "name": "raid_bdev1", 00:14:43.845 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:43.845 "strip_size_kb": 0, 00:14:43.845 "state": "online", 00:14:43.845 "raid_level": "raid1", 00:14:43.845 "superblock": false, 00:14:43.845 "num_base_bdevs": 4, 00:14:43.845 "num_base_bdevs_discovered": 3, 00:14:43.845 "num_base_bdevs_operational": 3, 00:14:43.845 "process": { 00:14:43.845 "type": "rebuild", 00:14:43.845 "target": "spare", 00:14:43.845 "progress": { 00:14:43.845 "blocks": 38912, 00:14:43.845 "percent": 59 00:14:43.845 } 00:14:43.845 }, 00:14:43.845 "base_bdevs_list": [ 00:14:43.845 { 00:14:43.845 "name": "spare", 00:14:43.845 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:43.845 "is_configured": true, 00:14:43.845 "data_offset": 0, 00:14:43.845 "data_size": 65536 00:14:43.845 }, 00:14:43.845 { 00:14:43.845 "name": null, 00:14:43.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.845 "is_configured": false, 00:14:43.845 "data_offset": 0, 00:14:43.845 "data_size": 65536 00:14:43.845 }, 00:14:43.845 { 00:14:43.845 "name": "BaseBdev3", 00:14:43.845 "uuid": 
"7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:43.845 "is_configured": true, 00:14:43.845 "data_offset": 0, 00:14:43.845 "data_size": 65536 00:14:43.845 }, 00:14:43.845 { 00:14:43.845 "name": "BaseBdev4", 00:14:43.845 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:43.845 "is_configured": true, 00:14:43.845 "data_offset": 0, 00:14:43.845 "data_size": 65536 00:14:43.845 } 00:14:43.845 ] 00:14:43.845 }' 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.845 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.845 [2024-11-15 09:33:32.169923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:43.846 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.846 09:33:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.105 [2024-11-15 09:33:32.511245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:44.363 [2024-11-15 09:33:32.734758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:44.623 [2024-11-15 09:33:32.938647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:44.883 99.67 IOPS, 299.00 MiB/s [2024-11-15T09:33:33.346Z] 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.883 "name": "raid_bdev1", 00:14:44.883 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:44.883 "strip_size_kb": 0, 00:14:44.883 "state": "online", 00:14:44.883 "raid_level": "raid1", 00:14:44.883 "superblock": false, 00:14:44.883 "num_base_bdevs": 4, 00:14:44.883 "num_base_bdevs_discovered": 3, 00:14:44.883 "num_base_bdevs_operational": 3, 00:14:44.883 "process": { 00:14:44.883 "type": "rebuild", 00:14:44.883 "target": "spare", 00:14:44.883 "progress": { 00:14:44.883 "blocks": 57344, 00:14:44.883 "percent": 87 00:14:44.883 } 00:14:44.883 }, 00:14:44.883 "base_bdevs_list": [ 00:14:44.883 { 00:14:44.883 "name": "spare", 00:14:44.883 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:44.883 "is_configured": true, 00:14:44.883 "data_offset": 0, 00:14:44.883 "data_size": 65536 00:14:44.883 }, 00:14:44.883 { 00:14:44.883 "name": null, 00:14:44.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.883 "is_configured": false, 00:14:44.883 "data_offset": 0, 00:14:44.883 
"data_size": 65536 00:14:44.883 }, 00:14:44.883 { 00:14:44.883 "name": "BaseBdev3", 00:14:44.883 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:44.883 "is_configured": true, 00:14:44.883 "data_offset": 0, 00:14:44.883 "data_size": 65536 00:14:44.883 }, 00:14:44.883 { 00:14:44.883 "name": "BaseBdev4", 00:14:44.883 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:44.883 "is_configured": true, 00:14:44.883 "data_offset": 0, 00:14:44.883 "data_size": 65536 00:14:44.883 } 00:14:44.883 ] 00:14:44.883 }' 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.883 09:33:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.452 [2024-11-15 09:33:33.613394] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:45.452 [2024-11-15 09:33:33.711346] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:45.452 [2024-11-15 09:33:33.713678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.972 90.43 IOPS, 271.29 MiB/s [2024-11-15T09:33:34.435Z] 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.972 09:33:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.972 "name": "raid_bdev1", 00:14:45.972 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:45.972 "strip_size_kb": 0, 00:14:45.972 "state": "online", 00:14:45.972 "raid_level": "raid1", 00:14:45.972 "superblock": false, 00:14:45.972 "num_base_bdevs": 4, 00:14:45.972 "num_base_bdevs_discovered": 3, 00:14:45.972 "num_base_bdevs_operational": 3, 00:14:45.972 "base_bdevs_list": [ 00:14:45.972 { 00:14:45.972 "name": "spare", 00:14:45.972 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:45.972 "is_configured": true, 00:14:45.972 "data_offset": 0, 00:14:45.972 "data_size": 65536 00:14:45.972 }, 00:14:45.972 { 00:14:45.972 "name": null, 00:14:45.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.972 "is_configured": false, 00:14:45.972 "data_offset": 0, 00:14:45.972 "data_size": 65536 00:14:45.972 }, 00:14:45.972 { 00:14:45.972 "name": "BaseBdev3", 00:14:45.972 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:45.972 "is_configured": true, 00:14:45.972 "data_offset": 0, 00:14:45.972 "data_size": 65536 00:14:45.972 }, 00:14:45.972 { 00:14:45.972 "name": "BaseBdev4", 00:14:45.972 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 
00:14:45.972 "is_configured": true, 00:14:45.972 "data_offset": 0, 00:14:45.972 "data_size": 65536 00:14:45.972 } 00:14:45.972 ] 00:14:45.972 }' 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:45.972 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.232 "name": "raid_bdev1", 00:14:46.232 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 
00:14:46.232 "strip_size_kb": 0, 00:14:46.232 "state": "online", 00:14:46.232 "raid_level": "raid1", 00:14:46.232 "superblock": false, 00:14:46.232 "num_base_bdevs": 4, 00:14:46.232 "num_base_bdevs_discovered": 3, 00:14:46.232 "num_base_bdevs_operational": 3, 00:14:46.232 "base_bdevs_list": [ 00:14:46.232 { 00:14:46.232 "name": "spare", 00:14:46.232 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:46.232 "is_configured": true, 00:14:46.232 "data_offset": 0, 00:14:46.232 "data_size": 65536 00:14:46.232 }, 00:14:46.232 { 00:14:46.232 "name": null, 00:14:46.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.232 "is_configured": false, 00:14:46.232 "data_offset": 0, 00:14:46.232 "data_size": 65536 00:14:46.232 }, 00:14:46.232 { 00:14:46.232 "name": "BaseBdev3", 00:14:46.232 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:46.232 "is_configured": true, 00:14:46.232 "data_offset": 0, 00:14:46.232 "data_size": 65536 00:14:46.232 }, 00:14:46.232 { 00:14:46.232 "name": "BaseBdev4", 00:14:46.232 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:46.232 "is_configured": true, 00:14:46.232 "data_offset": 0, 00:14:46.232 "data_size": 65536 00:14:46.232 } 00:14:46.232 ] 00:14:46.232 }' 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.232 "name": "raid_bdev1", 00:14:46.232 "uuid": "403d0d50-415f-4cae-8762-243a13789b47", 00:14:46.232 "strip_size_kb": 0, 00:14:46.232 "state": "online", 00:14:46.232 "raid_level": "raid1", 00:14:46.232 "superblock": false, 00:14:46.232 "num_base_bdevs": 4, 00:14:46.232 "num_base_bdevs_discovered": 3, 00:14:46.232 "num_base_bdevs_operational": 3, 00:14:46.232 "base_bdevs_list": [ 00:14:46.232 { 00:14:46.232 "name": "spare", 00:14:46.232 "uuid": "eac56a47-6b40-5a74-a0e8-835cf1c19f9f", 00:14:46.232 "is_configured": true, 00:14:46.232 "data_offset": 0, 00:14:46.232 
"data_size": 65536 00:14:46.232 }, 00:14:46.232 { 00:14:46.232 "name": null, 00:14:46.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.232 "is_configured": false, 00:14:46.232 "data_offset": 0, 00:14:46.232 "data_size": 65536 00:14:46.232 }, 00:14:46.232 { 00:14:46.232 "name": "BaseBdev3", 00:14:46.232 "uuid": "7502130e-2763-503e-a5a0-4849c4ca482b", 00:14:46.232 "is_configured": true, 00:14:46.232 "data_offset": 0, 00:14:46.232 "data_size": 65536 00:14:46.232 }, 00:14:46.232 { 00:14:46.232 "name": "BaseBdev4", 00:14:46.232 "uuid": "8861a2ba-8caf-5270-80fd-1b001776930d", 00:14:46.232 "is_configured": true, 00:14:46.232 "data_offset": 0, 00:14:46.232 "data_size": 65536 00:14:46.232 } 00:14:46.232 ] 00:14:46.232 }' 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.232 09:33:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.802 83.50 IOPS, 250.50 MiB/s [2024-11-15T09:33:35.265Z] 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.802 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.802 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.802 [2024-11-15 09:33:35.071847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.802 [2024-11-15 09:33:35.071918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.802 00:14:46.802 Latency(us) 00:14:46.802 [2024-11-15T09:33:35.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.802 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:46.802 raid_bdev1 : 8.21 81.83 245.48 0.00 0.00 16454.81 291.55 128210.17 00:14:46.802 [2024-11-15T09:33:35.265Z] 
=================================================================================================================== 00:14:46.802 [2024-11-15T09:33:35.265Z] Total : 81.83 245.48 0.00 0.00 16454.81 291.55 128210.17 00:14:46.802 [2024-11-15 09:33:35.194807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.802 [2024-11-15 09:33:35.194892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.802 [2024-11-15 09:33:35.195005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.802 [2024-11-15 09:33:35.195020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:46.802 { 00:14:46.802 "results": [ 00:14:46.802 { 00:14:46.802 "job": "raid_bdev1", 00:14:46.802 "core_mask": "0x1", 00:14:46.802 "workload": "randrw", 00:14:46.802 "percentage": 50, 00:14:46.802 "status": "finished", 00:14:46.802 "queue_depth": 2, 00:14:46.802 "io_size": 3145728, 00:14:46.802 "runtime": 8.21263, 00:14:46.802 "iops": 81.82518876413523, 00:14:46.802 "mibps": 245.4755662924057, 00:14:46.802 "io_failed": 0, 00:14:46.802 "io_timeout": 0, 00:14:46.802 "avg_latency_us": 16454.81389062175, 00:14:46.802 "min_latency_us": 291.54934497816595, 00:14:46.802 "max_latency_us": 128210.16593886462 00:14:46.802 } 00:14:46.802 ], 00:14:46.802 "core_count": 1 00:14:46.802 } 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.803 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:47.066 /dev/nbd0 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( 
i = 1 )) 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:47.066 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.067 1+0 records in 00:14:47.067 1+0 records out 00:14:47.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442733 s, 9.3 MB/s 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:47.067 09:33:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.067 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:47.326 /dev/nbd1 00:14:47.326 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.326 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.326 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:47.326 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:47.326 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:47.326 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:47.326 09:33:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.585 1+0 records in 00:14:47.585 1+0 records out 00:14:47.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527841 s, 7.8 MB/s 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:47.585 
09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.585 09:33:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.844 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:47.845 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.845 09:33:36 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:47.845 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.845 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:47.845 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.845 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:47.845 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:48.103 /dev/nbd1 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.103 1+0 records in 00:14:48.103 1+0 records out 00:14:48.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429532 s, 9.5 
MB/s 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.103 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.362 09:33:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.622 09:33:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79156 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 79156 ']' 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 79156 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79156 00:14:48.622 killing process with pid 79156 00:14:48.622 Received shutdown signal, test time was about 10.114758 seconds 00:14:48.622 00:14:48.622 Latency(us) 00:14:48.622 [2024-11-15T09:33:37.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.622 [2024-11-15T09:33:37.085Z] =================================================================================================================== 00:14:48.622 [2024-11-15T09:33:37.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.622 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:48.623 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:48.623 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79156' 00:14:48.623 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 79156 00:14:48.623 [2024-11-15 09:33:37.070048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:14:48.623 09:33:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 79156 00:14:49.191 [2024-11-15 09:33:37.581728] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.572 09:33:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:50.572 00:14:50.572 real 0m13.889s 00:14:50.572 user 0m17.416s 00:14:50.572 sys 0m1.954s 00:14:50.572 09:33:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:50.572 09:33:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 ************************************ 00:14:50.572 END TEST raid_rebuild_test_io 00:14:50.572 ************************************ 00:14:50.572 09:33:39 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:50.572 09:33:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:50.572 09:33:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:50.572 09:33:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.572 ************************************ 00:14:50.572 START TEST raid_rebuild_test_sb_io 00:14:50.572 ************************************ 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:50.829 09:33:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79570 00:14:50.829 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79570 00:14:50.830 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:50.830 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79570 ']' 00:14:50.830 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.830 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.830 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:50.830 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.830 09:33:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.830 [2024-11-15 09:33:39.149204] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:14:50.830 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.830 Zero copy mechanism will not be used. 00:14:50.830 [2024-11-15 09:33:39.149818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79570 ] 00:14:51.088 [2024-11-15 09:33:39.311054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.088 [2024-11-15 09:33:39.444363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.348 [2024-11-15 09:33:39.681276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.348 [2024-11-15 09:33:39.681352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.608 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:51.608 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:14:51.608 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.608 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.608 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.608 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.868 BaseBdev1_malloc 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.868 [2024-11-15 09:33:40.100142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.868 [2024-11-15 09:33:40.100223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.868 [2024-11-15 09:33:40.100252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:51.868 [2024-11-15 09:33:40.100266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.868 [2024-11-15 09:33:40.102689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.868 [2024-11-15 09:33:40.102730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.868 BaseBdev1 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.868 BaseBdev2_malloc 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.868 [2024-11-15 09:33:40.160925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:51.868 [2024-11-15 09:33:40.160994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.868 [2024-11-15 09:33:40.161017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:51.868 [2024-11-15 09:33:40.161033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.868 [2024-11-15 09:33:40.163447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.868 [2024-11-15 09:33:40.163487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.868 BaseBdev2 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.868 BaseBdev3_malloc 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:51.868 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.868 09:33:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.868 [2024-11-15 09:33:40.233127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:51.868 [2024-11-15 09:33:40.233202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.869 [2024-11-15 09:33:40.233228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:51.869 [2024-11-15 09:33:40.233241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.869 [2024-11-15 09:33:40.235623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.869 [2024-11-15 09:33:40.235669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:51.869 BaseBdev3 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.869 BaseBdev4_malloc 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.869 [2024-11-15 09:33:40.293002] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:51.869 [2024-11-15 09:33:40.293071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.869 [2024-11-15 09:33:40.293105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:51.869 [2024-11-15 09:33:40.293118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.869 [2024-11-15 09:33:40.295539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.869 [2024-11-15 09:33:40.295608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:51.869 BaseBdev4 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.869 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.129 spare_malloc 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.129 spare_delay 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.129 [2024-11-15 09:33:40.366791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:52.129 [2024-11-15 09:33:40.366876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.129 [2024-11-15 09:33:40.366901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:52.129 [2024-11-15 09:33:40.366915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.129 [2024-11-15 09:33:40.369375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.129 [2024-11-15 09:33:40.369416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.129 spare 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.129 [2024-11-15 09:33:40.378834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.129 [2024-11-15 09:33:40.380887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.129 [2024-11-15 09:33:40.380966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.129 [2024-11-15 09:33:40.381026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.129 [2024-11-15 09:33:40.381239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:52.129 [2024-11-15 09:33:40.381268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:52.129 [2024-11-15 09:33:40.381513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:52.129 [2024-11-15 09:33:40.381697] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:52.129 [2024-11-15 09:33:40.381715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:52.129 [2024-11-15 09:33:40.381894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.129 "name": "raid_bdev1", 00:14:52.129 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:52.129 "strip_size_kb": 0, 00:14:52.129 "state": "online", 00:14:52.129 "raid_level": "raid1", 00:14:52.129 "superblock": true, 00:14:52.129 "num_base_bdevs": 4, 00:14:52.129 "num_base_bdevs_discovered": 4, 00:14:52.129 "num_base_bdevs_operational": 4, 00:14:52.129 "base_bdevs_list": [ 00:14:52.129 { 00:14:52.129 "name": "BaseBdev1", 00:14:52.129 "uuid": "88e3ca36-578b-5d90-953a-68516a749ae0", 00:14:52.129 "is_configured": true, 00:14:52.129 "data_offset": 2048, 00:14:52.129 "data_size": 63488 00:14:52.129 }, 00:14:52.129 { 00:14:52.129 "name": "BaseBdev2", 00:14:52.129 "uuid": "13e0573c-8e1f-5afb-adf5-00ef98e5db02", 00:14:52.129 "is_configured": true, 00:14:52.129 "data_offset": 2048, 00:14:52.129 "data_size": 63488 00:14:52.129 }, 00:14:52.129 { 00:14:52.129 "name": "BaseBdev3", 00:14:52.129 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:52.129 "is_configured": true, 00:14:52.129 "data_offset": 2048, 00:14:52.129 "data_size": 63488 00:14:52.129 }, 00:14:52.129 { 00:14:52.129 "name": "BaseBdev4", 00:14:52.129 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:52.129 "is_configured": true, 00:14:52.129 "data_offset": 2048, 00:14:52.129 "data_size": 63488 00:14:52.129 } 00:14:52.129 ] 00:14:52.129 }' 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:52.129 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.390 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.390 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.390 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:52.390 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.390 [2024-11-15 09:33:40.834492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:52.652 09:33:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.652 [2024-11-15 09:33:40.913923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.652 "name": "raid_bdev1", 00:14:52.652 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:52.652 "strip_size_kb": 0, 00:14:52.652 "state": "online", 00:14:52.652 "raid_level": "raid1", 00:14:52.652 "superblock": true, 00:14:52.652 "num_base_bdevs": 4, 00:14:52.652 "num_base_bdevs_discovered": 3, 00:14:52.652 "num_base_bdevs_operational": 3, 00:14:52.652 "base_bdevs_list": [ 00:14:52.652 { 00:14:52.652 "name": null, 00:14:52.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.652 "is_configured": false, 00:14:52.652 "data_offset": 0, 00:14:52.652 "data_size": 63488 00:14:52.652 }, 00:14:52.652 { 00:14:52.652 "name": "BaseBdev2", 00:14:52.652 "uuid": "13e0573c-8e1f-5afb-adf5-00ef98e5db02", 00:14:52.652 "is_configured": true, 00:14:52.652 "data_offset": 2048, 00:14:52.652 "data_size": 63488 00:14:52.652 }, 00:14:52.652 { 00:14:52.652 "name": "BaseBdev3", 00:14:52.652 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:52.652 "is_configured": true, 00:14:52.652 "data_offset": 2048, 00:14:52.652 "data_size": 63488 00:14:52.652 }, 00:14:52.652 { 00:14:52.652 "name": "BaseBdev4", 00:14:52.652 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:52.652 "is_configured": true, 00:14:52.652 "data_offset": 2048, 00:14:52.652 "data_size": 63488 00:14:52.652 } 00:14:52.652 ] 00:14:52.652 }' 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.652 09:33:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.652 [2024-11-15 09:33:41.023074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:52.652 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:52.652 Zero copy mechanism will not be used. 
00:14:52.652 Running I/O for 60 seconds... 00:14:53.222 09:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:53.222 09:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.222 09:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.222 [2024-11-15 09:33:41.404544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.222 09:33:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.222 09:33:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:53.222 [2024-11-15 09:33:41.445229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:53.222 [2024-11-15 09:33:41.447273] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:53.222 [2024-11-15 09:33:41.581364] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:53.222 [2024-11-15 09:33:41.581896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:53.482 [2024-11-15 09:33:41.714074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.741 150.00 IOPS, 450.00 MiB/s [2024-11-15T09:33:42.204Z] [2024-11-15 09:33:42.081629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:54.000 [2024-11-15 09:33:42.317904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:54.000 [2024-11-15 09:33:42.318246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 
00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.000 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.001 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.260 "name": "raid_bdev1", 00:14:54.260 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:54.260 "strip_size_kb": 0, 00:14:54.260 "state": "online", 00:14:54.260 "raid_level": "raid1", 00:14:54.260 "superblock": true, 00:14:54.260 "num_base_bdevs": 4, 00:14:54.260 "num_base_bdevs_discovered": 4, 00:14:54.260 "num_base_bdevs_operational": 4, 00:14:54.260 "process": { 00:14:54.260 "type": "rebuild", 00:14:54.260 "target": "spare", 00:14:54.260 "progress": { 00:14:54.260 "blocks": 10240, 00:14:54.260 "percent": 16 00:14:54.260 } 00:14:54.260 }, 00:14:54.260 "base_bdevs_list": [ 00:14:54.260 { 00:14:54.260 "name": "spare", 00:14:54.260 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:14:54.260 "is_configured": true, 00:14:54.260 
"data_offset": 2048, 00:14:54.260 "data_size": 63488 00:14:54.260 }, 00:14:54.260 { 00:14:54.260 "name": "BaseBdev2", 00:14:54.260 "uuid": "13e0573c-8e1f-5afb-adf5-00ef98e5db02", 00:14:54.260 "is_configured": true, 00:14:54.260 "data_offset": 2048, 00:14:54.260 "data_size": 63488 00:14:54.260 }, 00:14:54.260 { 00:14:54.260 "name": "BaseBdev3", 00:14:54.260 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:54.260 "is_configured": true, 00:14:54.260 "data_offset": 2048, 00:14:54.260 "data_size": 63488 00:14:54.260 }, 00:14:54.260 { 00:14:54.260 "name": "BaseBdev4", 00:14:54.260 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:54.260 "is_configured": true, 00:14:54.260 "data_offset": 2048, 00:14:54.260 "data_size": 63488 00:14:54.260 } 00:14:54.260 ] 00:14:54.260 }' 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.260 [2024-11-15 09:33:42.595093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.260 [2024-11-15 09:33:42.595702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:54.260 [2024-11-15 09:33:42.609064] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:14:54.260 [2024-11-15 09:33:42.625308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.260 [2024-11-15 09:33:42.625372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.260 [2024-11-15 09:33:42.625396] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:54.260 [2024-11-15 09:33:42.660907] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.260 "name": "raid_bdev1", 00:14:54.260 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:54.260 "strip_size_kb": 0, 00:14:54.260 "state": "online", 00:14:54.260 "raid_level": "raid1", 00:14:54.260 "superblock": true, 00:14:54.260 "num_base_bdevs": 4, 00:14:54.260 "num_base_bdevs_discovered": 3, 00:14:54.260 "num_base_bdevs_operational": 3, 00:14:54.260 "base_bdevs_list": [ 00:14:54.260 { 00:14:54.260 "name": null, 00:14:54.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.260 "is_configured": false, 00:14:54.260 "data_offset": 0, 00:14:54.260 "data_size": 63488 00:14:54.260 }, 00:14:54.260 { 00:14:54.260 "name": "BaseBdev2", 00:14:54.260 "uuid": "13e0573c-8e1f-5afb-adf5-00ef98e5db02", 00:14:54.260 "is_configured": true, 00:14:54.260 "data_offset": 2048, 00:14:54.260 "data_size": 63488 00:14:54.260 }, 00:14:54.260 { 00:14:54.260 "name": "BaseBdev3", 00:14:54.260 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:54.260 "is_configured": true, 00:14:54.260 "data_offset": 2048, 00:14:54.260 "data_size": 63488 00:14:54.260 }, 00:14:54.260 { 00:14:54.260 "name": "BaseBdev4", 00:14:54.260 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:54.260 "is_configured": true, 00:14:54.260 "data_offset": 2048, 00:14:54.260 "data_size": 63488 00:14:54.260 } 00:14:54.260 ] 00:14:54.260 }' 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.260 09:33:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.831 140.50 IOPS, 421.50 MiB/s 
[2024-11-15T09:33:43.294Z] 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.831 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.831 "name": "raid_bdev1", 00:14:54.831 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:54.832 "strip_size_kb": 0, 00:14:54.832 "state": "online", 00:14:54.832 "raid_level": "raid1", 00:14:54.832 "superblock": true, 00:14:54.832 "num_base_bdevs": 4, 00:14:54.832 "num_base_bdevs_discovered": 3, 00:14:54.832 "num_base_bdevs_operational": 3, 00:14:54.832 "base_bdevs_list": [ 00:14:54.832 { 00:14:54.832 "name": null, 00:14:54.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.832 "is_configured": false, 00:14:54.832 "data_offset": 0, 00:14:54.832 "data_size": 63488 00:14:54.832 }, 00:14:54.832 { 00:14:54.832 "name": "BaseBdev2", 00:14:54.832 "uuid": "13e0573c-8e1f-5afb-adf5-00ef98e5db02", 00:14:54.832 
"is_configured": true, 00:14:54.832 "data_offset": 2048, 00:14:54.832 "data_size": 63488 00:14:54.832 }, 00:14:54.832 { 00:14:54.832 "name": "BaseBdev3", 00:14:54.832 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:54.832 "is_configured": true, 00:14:54.832 "data_offset": 2048, 00:14:54.832 "data_size": 63488 00:14:54.832 }, 00:14:54.832 { 00:14:54.832 "name": "BaseBdev4", 00:14:54.832 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:54.832 "is_configured": true, 00:14:54.832 "data_offset": 2048, 00:14:54.832 "data_size": 63488 00:14:54.832 } 00:14:54.832 ] 00:14:54.832 }' 00:14:54.832 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.832 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.832 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.832 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.832 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.832 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.832 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.832 [2024-11-15 09:33:43.283743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.092 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.092 09:33:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:55.092 [2024-11-15 09:33:43.337739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:55.093 [2024-11-15 09:33:43.339763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:55.093 
[2024-11-15 09:33:43.441931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:55.093 [2024-11-15 09:33:43.442542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:55.352 [2024-11-15 09:33:43.654632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:55.352 [2024-11-15 09:33:43.655016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:55.610 [2024-11-15 09:33:43.908930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:55.870 138.00 IOPS, 414.00 MiB/s [2024-11-15T09:33:44.333Z] [2024-11-15 09:33:44.297839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:55.870 [2024-11-15 09:33:44.309116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.870 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.129 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.129 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.129 "name": "raid_bdev1", 00:14:56.129 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:56.129 "strip_size_kb": 0, 00:14:56.129 "state": "online", 00:14:56.129 "raid_level": "raid1", 00:14:56.129 "superblock": true, 00:14:56.129 "num_base_bdevs": 4, 00:14:56.129 "num_base_bdevs_discovered": 4, 00:14:56.129 "num_base_bdevs_operational": 4, 00:14:56.129 "process": { 00:14:56.129 "type": "rebuild", 00:14:56.129 "target": "spare", 00:14:56.129 "progress": { 00:14:56.129 "blocks": 14336, 00:14:56.129 "percent": 22 00:14:56.129 } 00:14:56.129 }, 00:14:56.129 "base_bdevs_list": [ 00:14:56.129 { 00:14:56.129 "name": "spare", 00:14:56.129 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:14:56.129 "is_configured": true, 00:14:56.129 "data_offset": 2048, 00:14:56.129 "data_size": 63488 00:14:56.129 }, 00:14:56.129 { 00:14:56.129 "name": "BaseBdev2", 00:14:56.129 "uuid": "13e0573c-8e1f-5afb-adf5-00ef98e5db02", 00:14:56.129 "is_configured": true, 00:14:56.129 "data_offset": 2048, 00:14:56.129 "data_size": 63488 00:14:56.129 }, 00:14:56.129 { 00:14:56.129 "name": "BaseBdev3", 00:14:56.129 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:56.129 "is_configured": true, 00:14:56.129 "data_offset": 2048, 00:14:56.129 "data_size": 63488 00:14:56.129 }, 00:14:56.129 { 00:14:56.129 "name": "BaseBdev4", 00:14:56.129 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:56.129 "is_configured": true, 00:14:56.129 "data_offset": 2048, 00:14:56.129 "data_size": 63488 00:14:56.129 } 00:14:56.129 ] 00:14:56.129 }' 00:14:56.129 09:33:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.129 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.129 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.129 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:56.130 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.130 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.130 [2024-11-15 09:33:44.463069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.130 [2024-11-15 09:33:44.574992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:56.389 [2024-11-15 09:33:44.787205] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:56.389 [2024-11-15 09:33:44.787274] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:56.389 09:33:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.389 "name": "raid_bdev1", 00:14:56.389 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:56.389 "strip_size_kb": 0, 00:14:56.389 "state": "online", 00:14:56.389 "raid_level": "raid1", 00:14:56.389 "superblock": true, 00:14:56.389 "num_base_bdevs": 4, 00:14:56.389 "num_base_bdevs_discovered": 3, 00:14:56.389 "num_base_bdevs_operational": 3, 00:14:56.389 "process": { 00:14:56.389 "type": "rebuild", 00:14:56.389 "target": "spare", 
00:14:56.389 "progress": { 00:14:56.389 "blocks": 16384, 00:14:56.389 "percent": 25 00:14:56.389 } 00:14:56.389 }, 00:14:56.389 "base_bdevs_list": [ 00:14:56.389 { 00:14:56.389 "name": "spare", 00:14:56.389 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:14:56.389 "is_configured": true, 00:14:56.389 "data_offset": 2048, 00:14:56.389 "data_size": 63488 00:14:56.389 }, 00:14:56.389 { 00:14:56.389 "name": null, 00:14:56.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.389 "is_configured": false, 00:14:56.389 "data_offset": 0, 00:14:56.389 "data_size": 63488 00:14:56.389 }, 00:14:56.389 { 00:14:56.389 "name": "BaseBdev3", 00:14:56.389 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:56.389 "is_configured": true, 00:14:56.389 "data_offset": 2048, 00:14:56.389 "data_size": 63488 00:14:56.389 }, 00:14:56.389 { 00:14:56.389 "name": "BaseBdev4", 00:14:56.389 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:56.389 "is_configured": true, 00:14:56.389 "data_offset": 2048, 00:14:56.389 "data_size": 63488 00:14:56.389 } 00:14:56.389 ] 00:14:56.389 }' 00:14:56.389 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=518 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.648 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.648 "name": "raid_bdev1", 00:14:56.648 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:56.648 "strip_size_kb": 0, 00:14:56.648 "state": "online", 00:14:56.648 "raid_level": "raid1", 00:14:56.648 "superblock": true, 00:14:56.648 "num_base_bdevs": 4, 00:14:56.648 "num_base_bdevs_discovered": 3, 00:14:56.648 "num_base_bdevs_operational": 3, 00:14:56.648 "process": { 00:14:56.648 "type": "rebuild", 00:14:56.648 "target": "spare", 00:14:56.648 "progress": { 00:14:56.648 "blocks": 18432, 00:14:56.648 "percent": 29 00:14:56.648 } 00:14:56.648 }, 00:14:56.648 "base_bdevs_list": [ 00:14:56.648 { 00:14:56.648 "name": "spare", 00:14:56.648 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:14:56.648 "is_configured": true, 00:14:56.648 "data_offset": 2048, 00:14:56.648 "data_size": 63488 00:14:56.648 }, 00:14:56.649 { 00:14:56.649 "name": null, 00:14:56.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.649 "is_configured": false, 00:14:56.649 
"data_offset": 0, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev3", 00:14:56.649 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 }, 00:14:56.649 { 00:14:56.649 "name": "BaseBdev4", 00:14:56.649 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:56.649 "is_configured": true, 00:14:56.649 "data_offset": 2048, 00:14:56.649 "data_size": 63488 00:14:56.649 } 00:14:56.649 ] 00:14:56.649 }' 00:14:56.649 09:33:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.649 124.50 IOPS, 373.50 MiB/s [2024-11-15T09:33:45.112Z] 09:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.649 09:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.649 [2024-11-15 09:33:45.066404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:56.649 09:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.649 09:33:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.907 [2024-11-15 09:33:45.276953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:56.907 [2024-11-15 09:33:45.277280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:57.167 [2024-11-15 09:33:45.509120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:57.167 [2024-11-15 09:33:45.510258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:57.426 
[2024-11-15 09:33:45.732317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:57.685 108.80 IOPS, 326.40 MiB/s [2024-11-15T09:33:46.148Z] 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.685 "name": "raid_bdev1", 00:14:57.685 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:57.685 "strip_size_kb": 0, 00:14:57.685 "state": "online", 00:14:57.685 "raid_level": "raid1", 00:14:57.685 "superblock": true, 00:14:57.685 "num_base_bdevs": 4, 00:14:57.685 "num_base_bdevs_discovered": 3, 00:14:57.685 "num_base_bdevs_operational": 3, 00:14:57.685 "process": { 00:14:57.685 "type": "rebuild", 00:14:57.685 "target": 
"spare", 00:14:57.685 "progress": { 00:14:57.685 "blocks": 32768, 00:14:57.685 "percent": 51 00:14:57.685 } 00:14:57.685 }, 00:14:57.685 "base_bdevs_list": [ 00:14:57.685 { 00:14:57.685 "name": "spare", 00:14:57.685 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:14:57.685 "is_configured": true, 00:14:57.685 "data_offset": 2048, 00:14:57.685 "data_size": 63488 00:14:57.685 }, 00:14:57.685 { 00:14:57.685 "name": null, 00:14:57.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.685 "is_configured": false, 00:14:57.685 "data_offset": 0, 00:14:57.685 "data_size": 63488 00:14:57.685 }, 00:14:57.685 { 00:14:57.685 "name": "BaseBdev3", 00:14:57.685 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:57.685 "is_configured": true, 00:14:57.685 "data_offset": 2048, 00:14:57.685 "data_size": 63488 00:14:57.685 }, 00:14:57.685 { 00:14:57.685 "name": "BaseBdev4", 00:14:57.685 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:57.685 "is_configured": true, 00:14:57.685 "data_offset": 2048, 00:14:57.685 "data_size": 63488 00:14:57.685 } 00:14:57.685 ] 00:14:57.685 }' 00:14:57.685 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.944 [2024-11-15 09:33:46.176184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:57.944 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.944 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.944 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.944 09:33:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.203 [2024-11-15 09:33:46.507661] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:58.770 
[2024-11-15 09:33:46.976491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:59.029 95.83 IOPS, 287.50 MiB/s [2024-11-15T09:33:47.492Z] 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.029 "name": "raid_bdev1", 00:14:59.029 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:14:59.029 "strip_size_kb": 0, 00:14:59.029 "state": "online", 00:14:59.029 "raid_level": "raid1", 00:14:59.029 "superblock": true, 00:14:59.029 "num_base_bdevs": 4, 00:14:59.029 "num_base_bdevs_discovered": 3, 00:14:59.029 "num_base_bdevs_operational": 3, 00:14:59.029 "process": { 00:14:59.029 "type": "rebuild", 00:14:59.029 "target": 
"spare", 00:14:59.029 "progress": { 00:14:59.029 "blocks": 51200, 00:14:59.029 "percent": 80 00:14:59.029 } 00:14:59.029 }, 00:14:59.029 "base_bdevs_list": [ 00:14:59.029 { 00:14:59.029 "name": "spare", 00:14:59.029 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:14:59.029 "is_configured": true, 00:14:59.029 "data_offset": 2048, 00:14:59.029 "data_size": 63488 00:14:59.029 }, 00:14:59.029 { 00:14:59.029 "name": null, 00:14:59.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.029 "is_configured": false, 00:14:59.029 "data_offset": 0, 00:14:59.029 "data_size": 63488 00:14:59.029 }, 00:14:59.029 { 00:14:59.029 "name": "BaseBdev3", 00:14:59.029 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:14:59.029 "is_configured": true, 00:14:59.029 "data_offset": 2048, 00:14:59.029 "data_size": 63488 00:14:59.029 }, 00:14:59.029 { 00:14:59.029 "name": "BaseBdev4", 00:14:59.029 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:14:59.029 "is_configured": true, 00:14:59.029 "data_offset": 2048, 00:14:59.029 "data_size": 63488 00:14:59.029 } 00:14:59.029 ] 00:14:59.029 }' 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.029 [2024-11-15 09:33:47.337244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.029 09:33:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:59.288 [2024-11-15 09:33:47.685507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:59.857 87.71 
IOPS, 263.14 MiB/s [2024-11-15T09:33:48.320Z] [2024-11-15 09:33:48.030072] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:59.857 [2024-11-15 09:33:48.136387] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:59.857 [2024-11-15 09:33:48.141118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.118 "name": "raid_bdev1", 00:15:00.118 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:00.118 "strip_size_kb": 0, 00:15:00.118 "state": "online", 00:15:00.118 "raid_level": "raid1", 00:15:00.118 "superblock": 
true, 00:15:00.118 "num_base_bdevs": 4, 00:15:00.118 "num_base_bdevs_discovered": 3, 00:15:00.118 "num_base_bdevs_operational": 3, 00:15:00.118 "base_bdevs_list": [ 00:15:00.118 { 00:15:00.118 "name": "spare", 00:15:00.118 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:15:00.118 "is_configured": true, 00:15:00.118 "data_offset": 2048, 00:15:00.118 "data_size": 63488 00:15:00.118 }, 00:15:00.118 { 00:15:00.118 "name": null, 00:15:00.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.118 "is_configured": false, 00:15:00.118 "data_offset": 0, 00:15:00.118 "data_size": 63488 00:15:00.118 }, 00:15:00.118 { 00:15:00.118 "name": "BaseBdev3", 00:15:00.118 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:00.118 "is_configured": true, 00:15:00.118 "data_offset": 2048, 00:15:00.118 "data_size": 63488 00:15:00.118 }, 00:15:00.118 { 00:15:00.118 "name": "BaseBdev4", 00:15:00.118 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:00.118 "is_configured": true, 00:15:00.118 "data_offset": 2048, 00:15:00.118 "data_size": 63488 00:15:00.118 } 00:15:00.118 ] 00:15:00.118 }' 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.118 "name": "raid_bdev1", 00:15:00.118 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:00.118 "strip_size_kb": 0, 00:15:00.118 "state": "online", 00:15:00.118 "raid_level": "raid1", 00:15:00.118 "superblock": true, 00:15:00.118 "num_base_bdevs": 4, 00:15:00.118 "num_base_bdevs_discovered": 3, 00:15:00.118 "num_base_bdevs_operational": 3, 00:15:00.118 "base_bdevs_list": [ 00:15:00.118 { 00:15:00.118 "name": "spare", 00:15:00.118 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:15:00.118 "is_configured": true, 00:15:00.118 "data_offset": 2048, 00:15:00.118 "data_size": 63488 00:15:00.118 }, 00:15:00.118 { 00:15:00.118 "name": null, 00:15:00.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.118 "is_configured": false, 00:15:00.118 "data_offset": 0, 00:15:00.118 "data_size": 63488 00:15:00.118 }, 00:15:00.118 { 00:15:00.118 "name": "BaseBdev3", 00:15:00.118 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:00.118 "is_configured": true, 00:15:00.118 "data_offset": 2048, 00:15:00.118 "data_size": 63488 00:15:00.118 }, 00:15:00.118 { 00:15:00.118 "name": 
"BaseBdev4", 00:15:00.118 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:00.118 "is_configured": true, 00:15:00.118 "data_offset": 2048, 00:15:00.118 "data_size": 63488 00:15:00.118 } 00:15:00.118 ] 00:15:00.118 }' 00:15:00.118 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.378 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.378 "name": "raid_bdev1", 00:15:00.378 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:00.378 "strip_size_kb": 0, 00:15:00.378 "state": "online", 00:15:00.378 "raid_level": "raid1", 00:15:00.378 "superblock": true, 00:15:00.378 "num_base_bdevs": 4, 00:15:00.378 "num_base_bdevs_discovered": 3, 00:15:00.378 "num_base_bdevs_operational": 3, 00:15:00.378 "base_bdevs_list": [ 00:15:00.378 { 00:15:00.378 "name": "spare", 00:15:00.378 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:15:00.378 "is_configured": true, 00:15:00.378 "data_offset": 2048, 00:15:00.378 "data_size": 63488 00:15:00.378 }, 00:15:00.378 { 00:15:00.378 "name": null, 00:15:00.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.379 "is_configured": false, 00:15:00.379 "data_offset": 0, 00:15:00.379 "data_size": 63488 00:15:00.379 }, 00:15:00.379 { 00:15:00.379 "name": "BaseBdev3", 00:15:00.379 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:00.379 "is_configured": true, 00:15:00.379 "data_offset": 2048, 00:15:00.379 "data_size": 63488 00:15:00.379 }, 00:15:00.379 { 00:15:00.379 "name": "BaseBdev4", 00:15:00.379 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:00.379 "is_configured": true, 00:15:00.379 "data_offset": 2048, 00:15:00.379 "data_size": 63488 00:15:00.379 } 00:15:00.379 ] 00:15:00.379 }' 00:15:00.379 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.379 09:33:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:15:00.639 81.12 IOPS, 243.38 MiB/s [2024-11-15T09:33:49.102Z] 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.639 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.639 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.898 [2024-11-15 09:33:49.106994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:00.898 [2024-11-15 09:33:49.107039] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.898 00:15:00.898 Latency(us) 00:15:00.898 [2024-11-15T09:33:49.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.898 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:00.898 raid_bdev1 : 8.13 80.21 240.62 0.00 0.00 16115.18 332.69 123631.23 00:15:00.898 [2024-11-15T09:33:49.361Z] =================================================================================================================== 00:15:00.898 [2024-11-15T09:33:49.361Z] Total : 80.21 240.62 0.00 0.00 16115.18 332.69 123631.23 00:15:00.898 [2024-11-15 09:33:49.161554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.898 [2024-11-15 09:33:49.161613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.899 [2024-11-15 09:33:49.161717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.899 [2024-11-15 09:33:49.161732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:00.899 { 00:15:00.899 "results": [ 00:15:00.899 { 00:15:00.899 "job": "raid_bdev1", 00:15:00.899 "core_mask": "0x1", 00:15:00.899 "workload": "randrw", 00:15:00.899 "percentage": 50, 00:15:00.899 "status": "finished", 
00:15:00.899 "queue_depth": 2, 00:15:00.899 "io_size": 3145728, 00:15:00.899 "runtime": 8.129084, 00:15:00.899 "iops": 80.2058386898204, 00:15:00.899 "mibps": 240.6175160694612, 00:15:00.899 "io_failed": 0, 00:15:00.899 "io_timeout": 0, 00:15:00.899 "avg_latency_us": 16115.18045382699, 00:15:00.899 "min_latency_us": 332.6882096069869, 00:15:00.899 "max_latency_us": 123631.23144104803 00:15:00.899 } 00:15:00.899 ], 00:15:00.899 "core_count": 1 00:15:00.899 } 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:00.899 
09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.899 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:01.158 /dev/nbd0 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.158 1+0 records in 00:15:01.158 1+0 records out 00:15:01.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231044 s, 17.7 MB/s 00:15:01.158 
09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.158 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.159 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:01.418 /dev/nbd1 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.418 1+0 records in 00:15:01.418 1+0 records out 00:15:01.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298758 s, 13.7 MB/s 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.418 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:01.677 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:01.677 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.677 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:01.677 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.677 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:01.677 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.677 09:33:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:01.677 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:01.935 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:01.935 /dev/nbd1 00:15:01.935 09:33:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:02.194 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:02.195 1+0 records in 00:15:02.195 1+0 records out 00:15:02.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429207 s, 9.5 MB/s 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 
00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.195 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.454 
09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:02.454 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:02.713 09:33:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.713 [2024-11-15 09:33:51.013305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.713 [2024-11-15 09:33:51.013376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.713 [2024-11-15 09:33:51.013396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:02.713 [2024-11-15 09:33:51.013416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.713 [2024-11-15 09:33:51.015606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.713 [2024-11-15 09:33:51.015647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.713 [2024-11-15 09:33:51.015733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:02.713 [2024-11-15 09:33:51.015790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.713 [2024-11-15 09:33:51.015947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.713 [2024-11-15 09:33:51.016054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:02.713 spare 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.713 [2024-11-15 09:33:51.115983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:02.713 [2024-11-15 09:33:51.116043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:02.713 [2024-11-15 09:33:51.116410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:02.713 [2024-11-15 09:33:51.116665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:02.713 [2024-11-15 09:33:51.116692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:02.713 [2024-11-15 09:33:51.116916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.713 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.714 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.714 "name": "raid_bdev1", 00:15:02.714 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:02.714 "strip_size_kb": 0, 00:15:02.714 "state": "online", 00:15:02.714 "raid_level": "raid1", 00:15:02.714 "superblock": true, 00:15:02.714 "num_base_bdevs": 4, 00:15:02.714 "num_base_bdevs_discovered": 3, 00:15:02.714 "num_base_bdevs_operational": 3, 00:15:02.714 "base_bdevs_list": [ 00:15:02.714 { 00:15:02.714 "name": "spare", 00:15:02.714 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:15:02.714 "is_configured": true, 00:15:02.714 "data_offset": 2048, 00:15:02.714 "data_size": 63488 00:15:02.714 }, 00:15:02.714 { 00:15:02.714 "name": null, 00:15:02.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.714 "is_configured": false, 00:15:02.714 "data_offset": 2048, 00:15:02.714 "data_size": 63488 00:15:02.714 }, 00:15:02.714 { 00:15:02.714 "name": "BaseBdev3", 00:15:02.714 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:02.714 "is_configured": true, 00:15:02.714 "data_offset": 2048, 00:15:02.714 "data_size": 63488 00:15:02.714 }, 
00:15:02.714 { 00:15:02.714 "name": "BaseBdev4", 00:15:02.714 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:02.714 "is_configured": true, 00:15:02.714 "data_offset": 2048, 00:15:02.714 "data_size": 63488 00:15:02.714 } 00:15:02.714 ] 00:15:02.714 }' 00:15:02.714 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.714 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.306 "name": "raid_bdev1", 00:15:03.306 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:03.306 "strip_size_kb": 0, 00:15:03.306 "state": "online", 00:15:03.306 "raid_level": "raid1", 00:15:03.306 "superblock": true, 00:15:03.306 "num_base_bdevs": 4, 00:15:03.306 
"num_base_bdevs_discovered": 3, 00:15:03.306 "num_base_bdevs_operational": 3, 00:15:03.306 "base_bdevs_list": [ 00:15:03.306 { 00:15:03.306 "name": "spare", 00:15:03.306 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:15:03.306 "is_configured": true, 00:15:03.306 "data_offset": 2048, 00:15:03.306 "data_size": 63488 00:15:03.306 }, 00:15:03.306 { 00:15:03.306 "name": null, 00:15:03.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.306 "is_configured": false, 00:15:03.306 "data_offset": 2048, 00:15:03.306 "data_size": 63488 00:15:03.306 }, 00:15:03.306 { 00:15:03.306 "name": "BaseBdev3", 00:15:03.306 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:03.306 "is_configured": true, 00:15:03.306 "data_offset": 2048, 00:15:03.306 "data_size": 63488 00:15:03.306 }, 00:15:03.306 { 00:15:03.306 "name": "BaseBdev4", 00:15:03.306 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:03.306 "is_configured": true, 00:15:03.306 "data_offset": 2048, 00:15:03.306 "data_size": 63488 00:15:03.306 } 00:15:03.306 ] 00:15:03.306 }' 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.306 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.306 09:33:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 [2024-11-15 09:33:51.764227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.580 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.581 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.581 "name": "raid_bdev1", 00:15:03.581 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:03.581 "strip_size_kb": 0, 00:15:03.581 "state": "online", 00:15:03.581 "raid_level": "raid1", 00:15:03.581 "superblock": true, 00:15:03.581 "num_base_bdevs": 4, 00:15:03.581 "num_base_bdevs_discovered": 2, 00:15:03.581 "num_base_bdevs_operational": 2, 00:15:03.581 "base_bdevs_list": [ 00:15:03.581 { 00:15:03.581 "name": null, 00:15:03.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.581 "is_configured": false, 00:15:03.581 "data_offset": 0, 00:15:03.581 "data_size": 63488 00:15:03.581 }, 00:15:03.581 { 00:15:03.581 "name": null, 00:15:03.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.581 "is_configured": false, 00:15:03.581 "data_offset": 2048, 00:15:03.581 "data_size": 63488 00:15:03.581 }, 00:15:03.581 { 00:15:03.581 "name": "BaseBdev3", 00:15:03.581 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:03.581 "is_configured": true, 00:15:03.581 "data_offset": 2048, 00:15:03.581 "data_size": 63488 00:15:03.581 }, 00:15:03.581 { 00:15:03.581 "name": "BaseBdev4", 00:15:03.581 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:03.581 "is_configured": true, 00:15:03.581 "data_offset": 2048, 00:15:03.581 "data_size": 63488 00:15:03.581 } 00:15:03.581 ] 00:15:03.581 }' 00:15:03.581 09:33:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.581 09:33:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.839 09:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:03.839 09:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.839 09:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.839 [2024-11-15 09:33:52.211591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.839 [2024-11-15 09:33:52.211826] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:03.839 [2024-11-15 09:33:52.211865] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:03.839 [2024-11-15 09:33:52.211903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:03.839 [2024-11-15 09:33:52.226596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:03.839 09:33:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.839 09:33:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:03.839 [2024-11-15 09:33:52.228542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.777 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.035 "name": "raid_bdev1", 00:15:05.035 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:05.035 "strip_size_kb": 0, 00:15:05.035 "state": "online", 00:15:05.035 "raid_level": "raid1", 00:15:05.035 "superblock": true, 00:15:05.035 "num_base_bdevs": 4, 00:15:05.035 "num_base_bdevs_discovered": 3, 00:15:05.035 "num_base_bdevs_operational": 3, 00:15:05.035 "process": { 00:15:05.035 "type": "rebuild", 00:15:05.035 "target": "spare", 00:15:05.035 "progress": { 00:15:05.035 "blocks": 20480, 00:15:05.035 "percent": 32 00:15:05.035 } 00:15:05.035 }, 00:15:05.035 "base_bdevs_list": [ 00:15:05.035 { 00:15:05.035 "name": "spare", 00:15:05.035 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:15:05.035 "is_configured": true, 00:15:05.035 "data_offset": 2048, 00:15:05.035 "data_size": 63488 00:15:05.035 }, 00:15:05.035 { 00:15:05.035 "name": null, 00:15:05.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.035 "is_configured": false, 00:15:05.035 "data_offset": 2048, 00:15:05.035 "data_size": 63488 00:15:05.035 }, 00:15:05.035 { 00:15:05.035 "name": "BaseBdev3", 00:15:05.035 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:05.035 "is_configured": true, 00:15:05.035 "data_offset": 2048, 00:15:05.035 "data_size": 63488 00:15:05.035 }, 00:15:05.035 { 
00:15:05.035 "name": "BaseBdev4", 00:15:05.035 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:05.035 "is_configured": true, 00:15:05.035 "data_offset": 2048, 00:15:05.035 "data_size": 63488 00:15:05.035 } 00:15:05.035 ] 00:15:05.035 }' 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.035 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.036 [2024-11-15 09:33:53.396146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.036 [2024-11-15 09:33:53.434245] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.036 [2024-11-15 09:33:53.434340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.036 [2024-11-15 09:33:53.434357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.036 [2024-11-15 09:33:53.434367] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.036 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.294 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.294 "name": "raid_bdev1", 00:15:05.294 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:05.294 "strip_size_kb": 0, 00:15:05.294 "state": "online", 00:15:05.294 "raid_level": "raid1", 00:15:05.294 "superblock": true, 00:15:05.294 "num_base_bdevs": 4, 00:15:05.294 "num_base_bdevs_discovered": 2, 00:15:05.294 "num_base_bdevs_operational": 2, 00:15:05.294 "base_bdevs_list": [ 00:15:05.294 { 00:15:05.294 
"name": null, 00:15:05.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.294 "is_configured": false, 00:15:05.294 "data_offset": 0, 00:15:05.294 "data_size": 63488 00:15:05.294 }, 00:15:05.294 { 00:15:05.294 "name": null, 00:15:05.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.294 "is_configured": false, 00:15:05.294 "data_offset": 2048, 00:15:05.294 "data_size": 63488 00:15:05.294 }, 00:15:05.294 { 00:15:05.294 "name": "BaseBdev3", 00:15:05.294 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:05.294 "is_configured": true, 00:15:05.294 "data_offset": 2048, 00:15:05.294 "data_size": 63488 00:15:05.294 }, 00:15:05.294 { 00:15:05.294 "name": "BaseBdev4", 00:15:05.294 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:05.294 "is_configured": true, 00:15:05.294 "data_offset": 2048, 00:15:05.294 "data_size": 63488 00:15:05.294 } 00:15:05.294 ] 00:15:05.294 }' 00:15:05.295 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.295 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.553 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.553 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.553 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.553 [2024-11-15 09:33:53.972139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.553 [2024-11-15 09:33:53.972221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.553 [2024-11-15 09:33:53.972250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:05.553 [2024-11-15 09:33:53.972263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.553 [2024-11-15 09:33:53.972744] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.553 [2024-11-15 09:33:53.972784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.554 [2024-11-15 09:33:53.972895] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:05.554 [2024-11-15 09:33:53.972923] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:05.554 [2024-11-15 09:33:53.972935] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:05.554 [2024-11-15 09:33:53.972960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.554 [2024-11-15 09:33:53.987994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:05.554 spare 00:15:05.554 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.554 09:33:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:05.554 [2024-11-15 09:33:53.989970] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.928 09:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.928 09:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.928 09:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.928 09:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.928 09:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.928 09:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.928 09:33:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.928 "name": "raid_bdev1", 00:15:06.928 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:06.928 "strip_size_kb": 0, 00:15:06.928 "state": "online", 00:15:06.928 "raid_level": "raid1", 00:15:06.928 "superblock": true, 00:15:06.928 "num_base_bdevs": 4, 00:15:06.928 "num_base_bdevs_discovered": 3, 00:15:06.928 "num_base_bdevs_operational": 3, 00:15:06.928 "process": { 00:15:06.928 "type": "rebuild", 00:15:06.928 "target": "spare", 00:15:06.928 "progress": { 00:15:06.928 "blocks": 20480, 00:15:06.928 "percent": 32 00:15:06.928 } 00:15:06.928 }, 00:15:06.928 "base_bdevs_list": [ 00:15:06.928 { 00:15:06.928 "name": "spare", 00:15:06.928 "uuid": "334bf159-82c6-55e2-9eac-ae748d70dce8", 00:15:06.928 "is_configured": true, 00:15:06.928 "data_offset": 2048, 00:15:06.928 "data_size": 63488 00:15:06.928 }, 00:15:06.928 { 00:15:06.928 "name": null, 00:15:06.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.928 "is_configured": false, 00:15:06.928 "data_offset": 2048, 00:15:06.928 "data_size": 63488 00:15:06.928 }, 00:15:06.928 { 00:15:06.928 "name": "BaseBdev3", 00:15:06.928 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:06.928 "is_configured": true, 00:15:06.928 "data_offset": 2048, 00:15:06.928 "data_size": 63488 00:15:06.928 }, 00:15:06.928 { 00:15:06.928 "name": "BaseBdev4", 00:15:06.928 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:06.928 "is_configured": true, 00:15:06.928 "data_offset": 2048, 00:15:06.928 "data_size": 63488 00:15:06.928 } 00:15:06.928 
] 00:15:06.928 }' 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.928 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.928 [2024-11-15 09:33:55.130001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.929 [2024-11-15 09:33:55.195522] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.929 [2024-11-15 09:33:55.195593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.929 [2024-11-15 09:33:55.195611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.929 [2024-11-15 09:33:55.195618] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.929 "name": "raid_bdev1", 00:15:06.929 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:06.929 "strip_size_kb": 0, 00:15:06.929 "state": "online", 00:15:06.929 "raid_level": "raid1", 00:15:06.929 "superblock": true, 00:15:06.929 "num_base_bdevs": 4, 00:15:06.929 "num_base_bdevs_discovered": 2, 00:15:06.929 "num_base_bdevs_operational": 2, 00:15:06.929 "base_bdevs_list": [ 00:15:06.929 { 00:15:06.929 "name": null, 00:15:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.929 "is_configured": false, 00:15:06.929 "data_offset": 0, 00:15:06.929 "data_size": 63488 00:15:06.929 }, 00:15:06.929 { 
00:15:06.929 "name": null, 00:15:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.929 "is_configured": false, 00:15:06.929 "data_offset": 2048, 00:15:06.929 "data_size": 63488 00:15:06.929 }, 00:15:06.929 { 00:15:06.929 "name": "BaseBdev3", 00:15:06.929 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:06.929 "is_configured": true, 00:15:06.929 "data_offset": 2048, 00:15:06.929 "data_size": 63488 00:15:06.929 }, 00:15:06.929 { 00:15:06.929 "name": "BaseBdev4", 00:15:06.929 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:06.929 "is_configured": true, 00:15:06.929 "data_offset": 2048, 00:15:06.929 "data_size": 63488 00:15:06.929 } 00:15:06.929 ] 00:15:06.929 }' 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.929 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.497 "name": "raid_bdev1", 00:15:07.497 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:07.497 "strip_size_kb": 0, 00:15:07.497 "state": "online", 00:15:07.497 "raid_level": "raid1", 00:15:07.497 "superblock": true, 00:15:07.497 "num_base_bdevs": 4, 00:15:07.497 "num_base_bdevs_discovered": 2, 00:15:07.497 "num_base_bdevs_operational": 2, 00:15:07.497 "base_bdevs_list": [ 00:15:07.497 { 00:15:07.497 "name": null, 00:15:07.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.497 "is_configured": false, 00:15:07.497 "data_offset": 0, 00:15:07.497 "data_size": 63488 00:15:07.497 }, 00:15:07.497 { 00:15:07.497 "name": null, 00:15:07.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.497 "is_configured": false, 00:15:07.497 "data_offset": 2048, 00:15:07.497 "data_size": 63488 00:15:07.497 }, 00:15:07.497 { 00:15:07.497 "name": "BaseBdev3", 00:15:07.497 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:07.497 "is_configured": true, 00:15:07.497 "data_offset": 2048, 00:15:07.497 "data_size": 63488 00:15:07.497 }, 00:15:07.497 { 00:15:07.497 "name": "BaseBdev4", 00:15:07.497 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:07.497 "is_configured": true, 00:15:07.497 "data_offset": 2048, 00:15:07.497 "data_size": 63488 00:15:07.497 } 00:15:07.497 ] 00:15:07.497 }' 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.497 [2024-11-15 09:33:55.828547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:07.497 [2024-11-15 09:33:55.828614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.497 [2024-11-15 09:33:55.828637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:07.497 [2024-11-15 09:33:55.828647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.497 [2024-11-15 09:33:55.829146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.497 [2024-11-15 09:33:55.829172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:07.497 [2024-11-15 09:33:55.829266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:07.497 [2024-11-15 09:33:55.829284] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:07.497 [2024-11-15 09:33:55.829298] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:07.497 [2024-11-15 09:33:55.829310] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:07.497 BaseBdev1 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.497 09:33:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.464 "name": "raid_bdev1", 00:15:08.464 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:08.464 "strip_size_kb": 0, 00:15:08.464 "state": "online", 00:15:08.464 "raid_level": "raid1", 00:15:08.464 "superblock": true, 00:15:08.464 "num_base_bdevs": 4, 00:15:08.464 "num_base_bdevs_discovered": 2, 00:15:08.464 "num_base_bdevs_operational": 2, 00:15:08.464 "base_bdevs_list": [ 00:15:08.464 { 00:15:08.464 "name": null, 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.464 "is_configured": false, 00:15:08.464 "data_offset": 0, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": null, 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.464 "is_configured": false, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": "BaseBdev3", 00:15:08.464 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": "BaseBdev4", 00:15:08.464 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 } 00:15:08.464 ] 00:15:08.464 }' 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.464 09:33:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.032 "name": "raid_bdev1", 00:15:09.032 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:09.032 "strip_size_kb": 0, 00:15:09.032 "state": "online", 00:15:09.032 "raid_level": "raid1", 00:15:09.032 "superblock": true, 00:15:09.032 "num_base_bdevs": 4, 00:15:09.032 "num_base_bdevs_discovered": 2, 00:15:09.032 "num_base_bdevs_operational": 2, 00:15:09.032 "base_bdevs_list": [ 00:15:09.032 { 00:15:09.032 "name": null, 00:15:09.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.032 "is_configured": false, 00:15:09.032 "data_offset": 0, 00:15:09.032 "data_size": 63488 00:15:09.032 }, 00:15:09.032 { 00:15:09.032 "name": null, 00:15:09.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.032 "is_configured": false, 00:15:09.032 "data_offset": 2048, 00:15:09.032 "data_size": 63488 00:15:09.032 }, 00:15:09.032 { 00:15:09.032 "name": "BaseBdev3", 00:15:09.032 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:09.032 "is_configured": true, 00:15:09.032 "data_offset": 2048, 00:15:09.032 "data_size": 63488 00:15:09.032 }, 00:15:09.032 { 00:15:09.032 
"name": "BaseBdev4", 00:15:09.032 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:09.032 "is_configured": true, 00:15:09.032 "data_offset": 2048, 00:15:09.032 "data_size": 63488 00:15:09.032 } 00:15:09.032 ] 00:15:09.032 }' 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:09.032 [2024-11-15 09:33:57.470067] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.032 [2024-11-15 09:33:57.470243] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:09.032 [2024-11-15 09:33:57.470267] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:09.032 request: 00:15:09.032 { 00:15:09.032 "base_bdev": "BaseBdev1", 00:15:09.032 "raid_bdev": "raid_bdev1", 00:15:09.032 "method": "bdev_raid_add_base_bdev", 00:15:09.032 "req_id": 1 00:15:09.032 } 00:15:09.032 Got JSON-RPC error response 00:15:09.032 response: 00:15:09.032 { 00:15:09.032 "code": -22, 00:15:09.032 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:09.032 } 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.032 09:33:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.412 "name": "raid_bdev1", 00:15:10.412 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:10.412 "strip_size_kb": 0, 00:15:10.412 "state": "online", 00:15:10.412 "raid_level": "raid1", 00:15:10.412 "superblock": true, 00:15:10.412 "num_base_bdevs": 4, 00:15:10.412 "num_base_bdevs_discovered": 2, 00:15:10.412 "num_base_bdevs_operational": 2, 00:15:10.412 "base_bdevs_list": [ 00:15:10.412 { 00:15:10.412 "name": null, 00:15:10.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.412 "is_configured": false, 00:15:10.412 "data_offset": 0, 00:15:10.412 "data_size": 63488 00:15:10.412 }, 00:15:10.412 { 00:15:10.412 "name": null, 00:15:10.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.412 "is_configured": false, 
00:15:10.412 "data_offset": 2048, 00:15:10.412 "data_size": 63488 00:15:10.412 }, 00:15:10.412 { 00:15:10.412 "name": "BaseBdev3", 00:15:10.412 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:10.412 "is_configured": true, 00:15:10.412 "data_offset": 2048, 00:15:10.412 "data_size": 63488 00:15:10.412 }, 00:15:10.412 { 00:15:10.412 "name": "BaseBdev4", 00:15:10.412 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:10.412 "is_configured": true, 00:15:10.412 "data_offset": 2048, 00:15:10.412 "data_size": 63488 00:15:10.412 } 00:15:10.412 ] 00:15:10.412 }' 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.412 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.670 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:10.671 "name": "raid_bdev1", 00:15:10.671 "uuid": "0454ddb2-8998-439d-b0b8-5a60c0e6d550", 00:15:10.671 "strip_size_kb": 0, 00:15:10.671 "state": "online", 00:15:10.671 "raid_level": "raid1", 00:15:10.671 "superblock": true, 00:15:10.671 "num_base_bdevs": 4, 00:15:10.671 "num_base_bdevs_discovered": 2, 00:15:10.671 "num_base_bdevs_operational": 2, 00:15:10.671 "base_bdevs_list": [ 00:15:10.671 { 00:15:10.671 "name": null, 00:15:10.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.671 "is_configured": false, 00:15:10.671 "data_offset": 0, 00:15:10.671 "data_size": 63488 00:15:10.671 }, 00:15:10.671 { 00:15:10.671 "name": null, 00:15:10.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.671 "is_configured": false, 00:15:10.671 "data_offset": 2048, 00:15:10.671 "data_size": 63488 00:15:10.671 }, 00:15:10.671 { 00:15:10.671 "name": "BaseBdev3", 00:15:10.671 "uuid": "4e3c234f-d461-5f83-b98a-bec68bf55beb", 00:15:10.671 "is_configured": true, 00:15:10.671 "data_offset": 2048, 00:15:10.671 "data_size": 63488 00:15:10.671 }, 00:15:10.671 { 00:15:10.671 "name": "BaseBdev4", 00:15:10.671 "uuid": "7989df5b-710f-5fb7-a70e-1251d8fe6621", 00:15:10.671 "is_configured": true, 00:15:10.671 "data_offset": 2048, 00:15:10.671 "data_size": 63488 00:15:10.671 } 00:15:10.671 ] 00:15:10.671 }' 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.671 09:33:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79570 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 
79570 ']' 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79570 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79570 00:15:10.671 killing process with pid 79570 00:15:10.671 Received shutdown signal, test time was about 18.101138 seconds 00:15:10.671 00:15:10.671 Latency(us) 00:15:10.671 [2024-11-15T09:33:59.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.671 [2024-11-15T09:33:59.134Z] =================================================================================================================== 00:15:10.671 [2024-11-15T09:33:59.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79570' 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79570 00:15:10.671 [2024-11-15 09:33:59.091714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.671 [2024-11-15 09:33:59.091864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.671 09:33:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79570 00:15:10.671 [2024-11-15 09:33:59.091959] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.671 [2024-11-15 09:33:59.091973] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:11.238 [2024-11-15 09:33:59.540976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.619 ************************************ 00:15:12.619 END TEST raid_rebuild_test_sb_io 00:15:12.619 ************************************ 00:15:12.619 09:34:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:12.619 00:15:12.619 real 0m21.772s 00:15:12.619 user 0m28.390s 00:15:12.619 sys 0m2.832s 00:15:12.619 09:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:12.619 09:34:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.619 09:34:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:12.619 09:34:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:12.619 09:34:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:12.619 09:34:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:12.619 09:34:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.619 ************************************ 00:15:12.619 START TEST raid5f_state_function_test 00:15:12.619 ************************************ 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80301 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:12.619 Process raid pid: 80301 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80301' 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80301 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80301 ']' 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:12.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:12.619 09:34:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.619 [2024-11-15 09:34:00.988437] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:15:12.619 [2024-11-15 09:34:00.988591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.901 [2024-11-15 09:34:01.172071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.901 [2024-11-15 09:34:01.295654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.173 [2024-11-15 09:34:01.522314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.173 [2024-11-15 09:34:01.522365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.437 [2024-11-15 09:34:01.854943] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.437 [2024-11-15 09:34:01.855008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.437 [2024-11-15 09:34:01.855018] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.437 [2024-11-15 09:34:01.855028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.437 [2024-11-15 09:34:01.855040] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:13.437 [2024-11-15 09:34:01.855049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.437 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.438 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.438 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.438 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.438 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.438 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.438 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:13.696 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.696 "name": "Existed_Raid", 00:15:13.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.696 "strip_size_kb": 64, 00:15:13.696 "state": "configuring", 00:15:13.696 "raid_level": "raid5f", 00:15:13.696 "superblock": false, 00:15:13.696 "num_base_bdevs": 3, 00:15:13.696 "num_base_bdevs_discovered": 0, 00:15:13.696 "num_base_bdevs_operational": 3, 00:15:13.696 "base_bdevs_list": [ 00:15:13.696 { 00:15:13.696 "name": "BaseBdev1", 00:15:13.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.696 "is_configured": false, 00:15:13.696 "data_offset": 0, 00:15:13.696 "data_size": 0 00:15:13.696 }, 00:15:13.696 { 00:15:13.696 "name": "BaseBdev2", 00:15:13.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.696 "is_configured": false, 00:15:13.696 "data_offset": 0, 00:15:13.696 "data_size": 0 00:15:13.696 }, 00:15:13.696 { 00:15:13.696 "name": "BaseBdev3", 00:15:13.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.696 "is_configured": false, 00:15:13.696 "data_offset": 0, 00:15:13.696 "data_size": 0 00:15:13.696 } 00:15:13.696 ] 00:15:13.696 }' 00:15:13.696 09:34:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.696 09:34:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.955 [2024-11-15 09:34:02.362049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.955 [2024-11-15 09:34:02.362106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.955 [2024-11-15 09:34:02.374027] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.955 [2024-11-15 09:34:02.374087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.955 [2024-11-15 09:34:02.374099] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.955 [2024-11-15 09:34:02.374110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.955 [2024-11-15 09:34:02.374118] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.955 [2024-11-15 09:34:02.374130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.955 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 [2024-11-15 09:34:02.427099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.215 BaseBdev1 00:15:14.215 09:34:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 [ 00:15:14.215 { 00:15:14.215 "name": "BaseBdev1", 00:15:14.215 "aliases": [ 00:15:14.215 "a5cf803d-6e56-45d6-afea-e8e6b08a6b1a" 00:15:14.215 ], 00:15:14.215 "product_name": "Malloc disk", 00:15:14.215 "block_size": 512, 00:15:14.215 "num_blocks": 65536, 00:15:14.215 "uuid": "a5cf803d-6e56-45d6-afea-e8e6b08a6b1a", 00:15:14.215 "assigned_rate_limits": { 00:15:14.215 "rw_ios_per_sec": 0, 00:15:14.215 
"rw_mbytes_per_sec": 0, 00:15:14.215 "r_mbytes_per_sec": 0, 00:15:14.215 "w_mbytes_per_sec": 0 00:15:14.215 }, 00:15:14.215 "claimed": true, 00:15:14.215 "claim_type": "exclusive_write", 00:15:14.215 "zoned": false, 00:15:14.215 "supported_io_types": { 00:15:14.215 "read": true, 00:15:14.215 "write": true, 00:15:14.215 "unmap": true, 00:15:14.215 "flush": true, 00:15:14.215 "reset": true, 00:15:14.215 "nvme_admin": false, 00:15:14.215 "nvme_io": false, 00:15:14.215 "nvme_io_md": false, 00:15:14.215 "write_zeroes": true, 00:15:14.215 "zcopy": true, 00:15:14.215 "get_zone_info": false, 00:15:14.215 "zone_management": false, 00:15:14.215 "zone_append": false, 00:15:14.215 "compare": false, 00:15:14.215 "compare_and_write": false, 00:15:14.215 "abort": true, 00:15:14.215 "seek_hole": false, 00:15:14.215 "seek_data": false, 00:15:14.215 "copy": true, 00:15:14.215 "nvme_iov_md": false 00:15:14.215 }, 00:15:14.215 "memory_domains": [ 00:15:14.215 { 00:15:14.215 "dma_device_id": "system", 00:15:14.215 "dma_device_type": 1 00:15:14.215 }, 00:15:14.215 { 00:15:14.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.215 "dma_device_type": 2 00:15:14.215 } 00:15:14.215 ], 00:15:14.215 "driver_specific": {} 00:15:14.215 } 00:15:14.215 ] 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.215 09:34:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.215 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.215 "name": "Existed_Raid", 00:15:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.215 "strip_size_kb": 64, 00:15:14.215 "state": "configuring", 00:15:14.215 "raid_level": "raid5f", 00:15:14.215 "superblock": false, 00:15:14.215 "num_base_bdevs": 3, 00:15:14.215 "num_base_bdevs_discovered": 1, 00:15:14.215 "num_base_bdevs_operational": 3, 00:15:14.215 "base_bdevs_list": [ 00:15:14.215 { 00:15:14.215 "name": "BaseBdev1", 00:15:14.215 "uuid": "a5cf803d-6e56-45d6-afea-e8e6b08a6b1a", 00:15:14.215 "is_configured": true, 00:15:14.215 "data_offset": 0, 00:15:14.215 "data_size": 65536 00:15:14.215 }, 00:15:14.215 { 00:15:14.215 "name": 
"BaseBdev2", 00:15:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.216 "is_configured": false, 00:15:14.216 "data_offset": 0, 00:15:14.216 "data_size": 0 00:15:14.216 }, 00:15:14.216 { 00:15:14.216 "name": "BaseBdev3", 00:15:14.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.216 "is_configured": false, 00:15:14.216 "data_offset": 0, 00:15:14.216 "data_size": 0 00:15:14.216 } 00:15:14.216 ] 00:15:14.216 }' 00:15:14.216 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.216 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 [2024-11-15 09:34:02.902376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.476 [2024-11-15 09:34:02.902453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 [2024-11-15 09:34:02.914403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.476 [2024-11-15 09:34:02.916517] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:14.476 [2024-11-15 09:34:02.916572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.476 [2024-11-15 09:34:02.916585] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.476 [2024-11-15 09:34:02.916597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.476 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.735 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.735 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.735 "name": "Existed_Raid", 00:15:14.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.735 "strip_size_kb": 64, 00:15:14.735 "state": "configuring", 00:15:14.735 "raid_level": "raid5f", 00:15:14.735 "superblock": false, 00:15:14.735 "num_base_bdevs": 3, 00:15:14.735 "num_base_bdevs_discovered": 1, 00:15:14.735 "num_base_bdevs_operational": 3, 00:15:14.735 "base_bdevs_list": [ 00:15:14.735 { 00:15:14.735 "name": "BaseBdev1", 00:15:14.735 "uuid": "a5cf803d-6e56-45d6-afea-e8e6b08a6b1a", 00:15:14.735 "is_configured": true, 00:15:14.735 "data_offset": 0, 00:15:14.735 "data_size": 65536 00:15:14.735 }, 00:15:14.735 { 00:15:14.735 "name": "BaseBdev2", 00:15:14.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.735 "is_configured": false, 00:15:14.735 "data_offset": 0, 00:15:14.735 "data_size": 0 00:15:14.735 }, 00:15:14.735 { 00:15:14.735 "name": "BaseBdev3", 00:15:14.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.735 "is_configured": false, 00:15:14.735 "data_offset": 0, 00:15:14.735 "data_size": 0 00:15:14.735 } 00:15:14.735 ] 00:15:14.735 }' 00:15:14.735 09:34:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.735 09:34:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.993 [2024-11-15 09:34:03.411572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.993 BaseBdev2 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.993 [ 00:15:14.993 { 00:15:14.993 "name": "BaseBdev2", 00:15:14.993 "aliases": [ 00:15:14.993 "d4f1153b-af7b-4c8f-a680-41cb646d4541" 00:15:14.993 ], 00:15:14.993 "product_name": "Malloc disk", 00:15:14.993 "block_size": 512, 00:15:14.993 "num_blocks": 65536, 00:15:14.993 "uuid": "d4f1153b-af7b-4c8f-a680-41cb646d4541", 00:15:14.993 "assigned_rate_limits": { 00:15:14.993 "rw_ios_per_sec": 0, 00:15:14.993 "rw_mbytes_per_sec": 0, 00:15:14.993 "r_mbytes_per_sec": 0, 00:15:14.993 "w_mbytes_per_sec": 0 00:15:14.993 }, 00:15:14.993 "claimed": true, 00:15:14.993 "claim_type": "exclusive_write", 00:15:14.993 "zoned": false, 00:15:14.993 "supported_io_types": { 00:15:14.993 "read": true, 00:15:14.993 "write": true, 00:15:14.993 "unmap": true, 00:15:14.993 "flush": true, 00:15:14.993 "reset": true, 00:15:14.993 "nvme_admin": false, 00:15:14.993 "nvme_io": false, 00:15:14.993 "nvme_io_md": false, 00:15:14.993 "write_zeroes": true, 00:15:14.993 "zcopy": true, 00:15:14.993 "get_zone_info": false, 00:15:14.993 "zone_management": false, 00:15:14.993 "zone_append": false, 00:15:14.993 "compare": false, 00:15:14.993 "compare_and_write": false, 00:15:14.993 "abort": true, 00:15:14.993 "seek_hole": false, 00:15:14.993 "seek_data": false, 00:15:14.993 "copy": true, 00:15:14.993 "nvme_iov_md": false 00:15:14.993 }, 00:15:14.993 "memory_domains": [ 00:15:14.993 { 00:15:14.993 "dma_device_id": "system", 00:15:14.993 "dma_device_type": 1 00:15:14.993 }, 00:15:14.993 { 00:15:14.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.993 "dma_device_type": 2 00:15:14.993 } 00:15:14.993 ], 00:15:14.993 "driver_specific": {} 00:15:14.993 } 00:15:14.993 ] 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.993 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.251 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:15.251 "name": "Existed_Raid", 00:15:15.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.251 "strip_size_kb": 64, 00:15:15.251 "state": "configuring", 00:15:15.251 "raid_level": "raid5f", 00:15:15.251 "superblock": false, 00:15:15.251 "num_base_bdevs": 3, 00:15:15.251 "num_base_bdevs_discovered": 2, 00:15:15.251 "num_base_bdevs_operational": 3, 00:15:15.251 "base_bdevs_list": [ 00:15:15.251 { 00:15:15.251 "name": "BaseBdev1", 00:15:15.252 "uuid": "a5cf803d-6e56-45d6-afea-e8e6b08a6b1a", 00:15:15.252 "is_configured": true, 00:15:15.252 "data_offset": 0, 00:15:15.252 "data_size": 65536 00:15:15.252 }, 00:15:15.252 { 00:15:15.252 "name": "BaseBdev2", 00:15:15.252 "uuid": "d4f1153b-af7b-4c8f-a680-41cb646d4541", 00:15:15.252 "is_configured": true, 00:15:15.252 "data_offset": 0, 00:15:15.252 "data_size": 65536 00:15:15.252 }, 00:15:15.252 { 00:15:15.252 "name": "BaseBdev3", 00:15:15.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.252 "is_configured": false, 00:15:15.252 "data_offset": 0, 00:15:15.252 "data_size": 0 00:15:15.252 } 00:15:15.252 ] 00:15:15.252 }' 00:15:15.252 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.252 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.511 09:34:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:15.511 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.511 09:34:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.769 [2024-11-15 09:34:03.997283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.769 [2024-11-15 09:34:03.997376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:15.769 [2024-11-15 09:34:03.997392] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:15.769 [2024-11-15 09:34:03.997669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:15.769 [2024-11-15 09:34:04.003102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:15.769 [2024-11-15 09:34:04.003125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:15.769 [2024-11-15 09:34:04.003389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.769 BaseBdev3 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.769 [ 00:15:15.769 { 00:15:15.769 "name": "BaseBdev3", 00:15:15.769 "aliases": [ 00:15:15.769 "fb1a3738-f614-4973-9e02-b61aee2ce3d7" 00:15:15.769 ], 00:15:15.769 "product_name": "Malloc disk", 00:15:15.769 "block_size": 512, 00:15:15.769 "num_blocks": 65536, 00:15:15.769 "uuid": "fb1a3738-f614-4973-9e02-b61aee2ce3d7", 00:15:15.769 "assigned_rate_limits": { 00:15:15.769 "rw_ios_per_sec": 0, 00:15:15.769 "rw_mbytes_per_sec": 0, 00:15:15.769 "r_mbytes_per_sec": 0, 00:15:15.769 "w_mbytes_per_sec": 0 00:15:15.769 }, 00:15:15.769 "claimed": true, 00:15:15.769 "claim_type": "exclusive_write", 00:15:15.769 "zoned": false, 00:15:15.769 "supported_io_types": { 00:15:15.769 "read": true, 00:15:15.769 "write": true, 00:15:15.769 "unmap": true, 00:15:15.769 "flush": true, 00:15:15.769 "reset": true, 00:15:15.769 "nvme_admin": false, 00:15:15.769 "nvme_io": false, 00:15:15.769 "nvme_io_md": false, 00:15:15.769 "write_zeroes": true, 00:15:15.769 "zcopy": true, 00:15:15.769 "get_zone_info": false, 00:15:15.769 "zone_management": false, 00:15:15.769 "zone_append": false, 00:15:15.769 "compare": false, 00:15:15.769 "compare_and_write": false, 00:15:15.769 "abort": true, 00:15:15.769 "seek_hole": false, 00:15:15.769 "seek_data": false, 00:15:15.769 "copy": true, 00:15:15.769 "nvme_iov_md": false 00:15:15.769 }, 00:15:15.769 "memory_domains": [ 00:15:15.769 { 00:15:15.769 "dma_device_id": "system", 00:15:15.769 "dma_device_type": 1 00:15:15.769 }, 00:15:15.769 { 00:15:15.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.769 "dma_device_type": 2 00:15:15.769 } 00:15:15.769 ], 00:15:15.769 "driver_specific": {} 00:15:15.769 } 00:15:15.769 ] 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.769 09:34:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.769 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.769 "name": "Existed_Raid", 00:15:15.769 "uuid": "89ea7a68-4ea9-4070-8293-f2a9015da316", 00:15:15.769 "strip_size_kb": 64, 00:15:15.769 "state": "online", 00:15:15.769 "raid_level": "raid5f", 00:15:15.769 "superblock": false, 00:15:15.769 "num_base_bdevs": 3, 00:15:15.769 "num_base_bdevs_discovered": 3, 00:15:15.769 "num_base_bdevs_operational": 3, 00:15:15.769 "base_bdevs_list": [ 00:15:15.769 { 00:15:15.769 "name": "BaseBdev1", 00:15:15.769 "uuid": "a5cf803d-6e56-45d6-afea-e8e6b08a6b1a", 00:15:15.769 "is_configured": true, 00:15:15.769 "data_offset": 0, 00:15:15.769 "data_size": 65536 00:15:15.769 }, 00:15:15.769 { 00:15:15.769 "name": "BaseBdev2", 00:15:15.769 "uuid": "d4f1153b-af7b-4c8f-a680-41cb646d4541", 00:15:15.769 "is_configured": true, 00:15:15.769 "data_offset": 0, 00:15:15.769 "data_size": 65536 00:15:15.769 }, 00:15:15.769 { 00:15:15.769 "name": "BaseBdev3", 00:15:15.769 "uuid": "fb1a3738-f614-4973-9e02-b61aee2ce3d7", 00:15:15.769 "is_configured": true, 00:15:15.769 "data_offset": 0, 00:15:15.769 "data_size": 65536 00:15:15.769 } 00:15:15.769 ] 00:15:15.769 }' 00:15:15.770 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.770 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.027 09:34:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.027 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.027 [2024-11-15 09:34:04.485028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.285 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.285 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.285 "name": "Existed_Raid", 00:15:16.285 "aliases": [ 00:15:16.285 "89ea7a68-4ea9-4070-8293-f2a9015da316" 00:15:16.285 ], 00:15:16.285 "product_name": "Raid Volume", 00:15:16.285 "block_size": 512, 00:15:16.286 "num_blocks": 131072, 00:15:16.286 "uuid": "89ea7a68-4ea9-4070-8293-f2a9015da316", 00:15:16.286 "assigned_rate_limits": { 00:15:16.286 "rw_ios_per_sec": 0, 00:15:16.286 "rw_mbytes_per_sec": 0, 00:15:16.286 "r_mbytes_per_sec": 0, 00:15:16.286 "w_mbytes_per_sec": 0 00:15:16.286 }, 00:15:16.286 "claimed": false, 00:15:16.286 "zoned": false, 00:15:16.286 "supported_io_types": { 00:15:16.286 "read": true, 00:15:16.286 "write": true, 00:15:16.286 "unmap": false, 00:15:16.286 "flush": false, 00:15:16.286 "reset": true, 00:15:16.286 "nvme_admin": false, 00:15:16.286 "nvme_io": false, 00:15:16.286 "nvme_io_md": false, 00:15:16.286 "write_zeroes": true, 00:15:16.286 "zcopy": false, 00:15:16.286 "get_zone_info": false, 00:15:16.286 "zone_management": false, 00:15:16.286 "zone_append": false, 
00:15:16.286 "compare": false, 00:15:16.286 "compare_and_write": false, 00:15:16.286 "abort": false, 00:15:16.286 "seek_hole": false, 00:15:16.286 "seek_data": false, 00:15:16.286 "copy": false, 00:15:16.286 "nvme_iov_md": false 00:15:16.286 }, 00:15:16.286 "driver_specific": { 00:15:16.286 "raid": { 00:15:16.286 "uuid": "89ea7a68-4ea9-4070-8293-f2a9015da316", 00:15:16.286 "strip_size_kb": 64, 00:15:16.286 "state": "online", 00:15:16.286 "raid_level": "raid5f", 00:15:16.286 "superblock": false, 00:15:16.286 "num_base_bdevs": 3, 00:15:16.286 "num_base_bdevs_discovered": 3, 00:15:16.286 "num_base_bdevs_operational": 3, 00:15:16.286 "base_bdevs_list": [ 00:15:16.286 { 00:15:16.286 "name": "BaseBdev1", 00:15:16.286 "uuid": "a5cf803d-6e56-45d6-afea-e8e6b08a6b1a", 00:15:16.286 "is_configured": true, 00:15:16.286 "data_offset": 0, 00:15:16.286 "data_size": 65536 00:15:16.286 }, 00:15:16.286 { 00:15:16.286 "name": "BaseBdev2", 00:15:16.286 "uuid": "d4f1153b-af7b-4c8f-a680-41cb646d4541", 00:15:16.286 "is_configured": true, 00:15:16.286 "data_offset": 0, 00:15:16.286 "data_size": 65536 00:15:16.286 }, 00:15:16.286 { 00:15:16.286 "name": "BaseBdev3", 00:15:16.286 "uuid": "fb1a3738-f614-4973-9e02-b61aee2ce3d7", 00:15:16.286 "is_configured": true, 00:15:16.286 "data_offset": 0, 00:15:16.286 "data_size": 65536 00:15:16.286 } 00:15:16.286 ] 00:15:16.286 } 00:15:16.286 } 00:15:16.286 }' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:16.286 BaseBdev2 00:15:16.286 BaseBdev3' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.286 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.545 [2024-11-15 09:34:04.784341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:16.545 
09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.545 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.545 "name": "Existed_Raid", 00:15:16.545 "uuid": "89ea7a68-4ea9-4070-8293-f2a9015da316", 00:15:16.545 "strip_size_kb": 64, 00:15:16.545 "state": 
"online", 00:15:16.545 "raid_level": "raid5f", 00:15:16.545 "superblock": false, 00:15:16.545 "num_base_bdevs": 3, 00:15:16.545 "num_base_bdevs_discovered": 2, 00:15:16.545 "num_base_bdevs_operational": 2, 00:15:16.545 "base_bdevs_list": [ 00:15:16.545 { 00:15:16.545 "name": null, 00:15:16.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.545 "is_configured": false, 00:15:16.545 "data_offset": 0, 00:15:16.545 "data_size": 65536 00:15:16.545 }, 00:15:16.545 { 00:15:16.545 "name": "BaseBdev2", 00:15:16.546 "uuid": "d4f1153b-af7b-4c8f-a680-41cb646d4541", 00:15:16.546 "is_configured": true, 00:15:16.546 "data_offset": 0, 00:15:16.546 "data_size": 65536 00:15:16.546 }, 00:15:16.546 { 00:15:16.546 "name": "BaseBdev3", 00:15:16.546 "uuid": "fb1a3738-f614-4973-9e02-b61aee2ce3d7", 00:15:16.546 "is_configured": true, 00:15:16.546 "data_offset": 0, 00:15:16.546 "data_size": 65536 00:15:16.546 } 00:15:16.546 ] 00:15:16.546 }' 00:15:16.546 09:34:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.546 09:34:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.112 [2024-11-15 09:34:05.359190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.112 [2024-11-15 09:34:05.359301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.112 [2024-11-15 09:34:05.454329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.112 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.112 [2024-11-15 09:34:05.514255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:17.112 [2024-11-15 09:34:05.514315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 BaseBdev2 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:17.371 [ 00:15:17.371 { 00:15:17.371 "name": "BaseBdev2", 00:15:17.371 "aliases": [ 00:15:17.371 "e1148d2d-beec-4067-84f6-00672d80c6be" 00:15:17.371 ], 00:15:17.371 "product_name": "Malloc disk", 00:15:17.371 "block_size": 512, 00:15:17.371 "num_blocks": 65536, 00:15:17.371 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:17.371 "assigned_rate_limits": { 00:15:17.371 "rw_ios_per_sec": 0, 00:15:17.371 "rw_mbytes_per_sec": 0, 00:15:17.371 "r_mbytes_per_sec": 0, 00:15:17.371 "w_mbytes_per_sec": 0 00:15:17.371 }, 00:15:17.371 "claimed": false, 00:15:17.371 "zoned": false, 00:15:17.371 "supported_io_types": { 00:15:17.371 "read": true, 00:15:17.371 "write": true, 00:15:17.371 "unmap": true, 00:15:17.371 "flush": true, 00:15:17.371 "reset": true, 00:15:17.371 "nvme_admin": false, 00:15:17.371 "nvme_io": false, 00:15:17.371 "nvme_io_md": false, 00:15:17.371 "write_zeroes": true, 00:15:17.371 "zcopy": true, 00:15:17.371 "get_zone_info": false, 00:15:17.371 "zone_management": false, 00:15:17.371 "zone_append": false, 00:15:17.371 "compare": false, 00:15:17.371 "compare_and_write": false, 00:15:17.371 "abort": true, 00:15:17.371 "seek_hole": false, 00:15:17.371 "seek_data": false, 00:15:17.371 "copy": true, 00:15:17.371 "nvme_iov_md": false 00:15:17.371 }, 00:15:17.371 "memory_domains": [ 00:15:17.371 { 00:15:17.371 "dma_device_id": "system", 00:15:17.371 "dma_device_type": 1 00:15:17.371 }, 00:15:17.371 { 00:15:17.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.371 "dma_device_type": 2 00:15:17.371 } 00:15:17.371 ], 00:15:17.371 "driver_specific": {} 00:15:17.371 } 00:15:17.371 ] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 BaseBdev3 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.371 09:34:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.371 [ 00:15:17.371 { 00:15:17.371 "name": "BaseBdev3", 00:15:17.371 "aliases": [ 00:15:17.371 "ab2500b6-f4ac-420c-8aac-4304f38d5766" 00:15:17.371 ], 00:15:17.371 "product_name": "Malloc disk", 00:15:17.371 "block_size": 512, 00:15:17.371 "num_blocks": 65536, 00:15:17.371 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:17.371 "assigned_rate_limits": { 00:15:17.371 "rw_ios_per_sec": 0, 00:15:17.371 "rw_mbytes_per_sec": 0, 00:15:17.371 "r_mbytes_per_sec": 0, 00:15:17.371 "w_mbytes_per_sec": 0 00:15:17.371 }, 00:15:17.371 "claimed": false, 00:15:17.371 "zoned": false, 00:15:17.371 "supported_io_types": { 00:15:17.371 "read": true, 00:15:17.371 "write": true, 00:15:17.371 "unmap": true, 00:15:17.371 "flush": true, 00:15:17.371 "reset": true, 00:15:17.371 "nvme_admin": false, 00:15:17.371 "nvme_io": false, 00:15:17.371 "nvme_io_md": false, 00:15:17.371 "write_zeroes": true, 00:15:17.371 "zcopy": true, 00:15:17.371 "get_zone_info": false, 00:15:17.371 "zone_management": false, 00:15:17.371 "zone_append": false, 00:15:17.371 "compare": false, 00:15:17.371 "compare_and_write": false, 00:15:17.371 "abort": true, 00:15:17.371 "seek_hole": false, 00:15:17.372 "seek_data": false, 00:15:17.372 "copy": true, 00:15:17.372 "nvme_iov_md": false 00:15:17.372 }, 00:15:17.372 "memory_domains": [ 00:15:17.372 { 00:15:17.372 "dma_device_id": "system", 00:15:17.372 "dma_device_type": 1 00:15:17.372 }, 00:15:17.372 { 00:15:17.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.372 "dma_device_type": 2 00:15:17.372 } 00:15:17.372 ], 00:15:17.372 "driver_specific": {} 00:15:17.372 } 00:15:17.372 ] 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:17.372 09:34:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.372 [2024-11-15 09:34:05.822093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.372 [2024-11-15 09:34:05.822146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.372 [2024-11-15 09:34:05.822166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.372 [2024-11-15 09:34:05.823888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.372 09:34:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.372 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.630 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.630 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.630 "name": "Existed_Raid", 00:15:17.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.630 "strip_size_kb": 64, 00:15:17.630 "state": "configuring", 00:15:17.630 "raid_level": "raid5f", 00:15:17.630 "superblock": false, 00:15:17.630 "num_base_bdevs": 3, 00:15:17.630 "num_base_bdevs_discovered": 2, 00:15:17.630 "num_base_bdevs_operational": 3, 00:15:17.630 "base_bdevs_list": [ 00:15:17.630 { 00:15:17.630 "name": "BaseBdev1", 00:15:17.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.630 "is_configured": false, 00:15:17.630 "data_offset": 0, 00:15:17.630 "data_size": 0 00:15:17.630 }, 00:15:17.630 { 00:15:17.630 "name": "BaseBdev2", 00:15:17.630 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:17.630 "is_configured": true, 00:15:17.630 "data_offset": 0, 00:15:17.630 "data_size": 65536 00:15:17.630 }, 00:15:17.630 { 00:15:17.630 "name": "BaseBdev3", 00:15:17.630 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:17.630 "is_configured": true, 
00:15:17.630 "data_offset": 0, 00:15:17.630 "data_size": 65536 00:15:17.630 } 00:15:17.630 ] 00:15:17.630 }' 00:15:17.630 09:34:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.630 09:34:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.905 [2024-11-15 09:34:06.309311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.905 09:34:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.905 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.175 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.175 "name": "Existed_Raid", 00:15:18.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.175 "strip_size_kb": 64, 00:15:18.175 "state": "configuring", 00:15:18.175 "raid_level": "raid5f", 00:15:18.175 "superblock": false, 00:15:18.175 "num_base_bdevs": 3, 00:15:18.175 "num_base_bdevs_discovered": 1, 00:15:18.175 "num_base_bdevs_operational": 3, 00:15:18.175 "base_bdevs_list": [ 00:15:18.175 { 00:15:18.175 "name": "BaseBdev1", 00:15:18.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.175 "is_configured": false, 00:15:18.175 "data_offset": 0, 00:15:18.175 "data_size": 0 00:15:18.175 }, 00:15:18.175 { 00:15:18.175 "name": null, 00:15:18.175 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:18.175 "is_configured": false, 00:15:18.175 "data_offset": 0, 00:15:18.175 "data_size": 65536 00:15:18.175 }, 00:15:18.175 { 00:15:18.175 "name": "BaseBdev3", 00:15:18.175 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:18.175 "is_configured": true, 00:15:18.175 "data_offset": 0, 00:15:18.175 "data_size": 65536 00:15:18.175 } 00:15:18.175 ] 00:15:18.175 }' 00:15:18.175 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.175 09:34:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.434 [2024-11-15 09:34:06.865949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.434 BaseBdev1 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:18.434 09:34:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.434 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.435 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.435 [ 00:15:18.435 { 00:15:18.435 "name": "BaseBdev1", 00:15:18.435 "aliases": [ 00:15:18.435 "67e45d17-3ef4-4839-b0c3-5b9486a25cbb" 00:15:18.435 ], 00:15:18.435 "product_name": "Malloc disk", 00:15:18.435 "block_size": 512, 00:15:18.435 "num_blocks": 65536, 00:15:18.435 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:18.435 "assigned_rate_limits": { 00:15:18.435 "rw_ios_per_sec": 0, 00:15:18.435 "rw_mbytes_per_sec": 0, 00:15:18.435 "r_mbytes_per_sec": 0, 00:15:18.435 "w_mbytes_per_sec": 0 00:15:18.435 }, 00:15:18.435 "claimed": true, 00:15:18.435 "claim_type": "exclusive_write", 00:15:18.435 "zoned": false, 00:15:18.435 "supported_io_types": { 00:15:18.435 "read": true, 00:15:18.435 "write": true, 00:15:18.435 "unmap": true, 00:15:18.435 "flush": true, 00:15:18.435 "reset": true, 00:15:18.435 "nvme_admin": false, 00:15:18.435 "nvme_io": false, 00:15:18.435 "nvme_io_md": false, 00:15:18.435 "write_zeroes": true, 00:15:18.435 "zcopy": true, 00:15:18.435 "get_zone_info": false, 00:15:18.435 "zone_management": false, 00:15:18.435 "zone_append": false, 00:15:18.435 
"compare": false, 00:15:18.435 "compare_and_write": false, 00:15:18.435 "abort": true, 00:15:18.435 "seek_hole": false, 00:15:18.435 "seek_data": false, 00:15:18.435 "copy": true, 00:15:18.435 "nvme_iov_md": false 00:15:18.435 }, 00:15:18.435 "memory_domains": [ 00:15:18.435 { 00:15:18.435 "dma_device_id": "system", 00:15:18.435 "dma_device_type": 1 00:15:18.435 }, 00:15:18.435 { 00:15:18.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.694 "dma_device_type": 2 00:15:18.694 } 00:15:18.694 ], 00:15:18.694 "driver_specific": {} 00:15:18.694 } 00:15:18.694 ] 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.694 09:34:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.694 "name": "Existed_Raid", 00:15:18.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.694 "strip_size_kb": 64, 00:15:18.694 "state": "configuring", 00:15:18.694 "raid_level": "raid5f", 00:15:18.694 "superblock": false, 00:15:18.694 "num_base_bdevs": 3, 00:15:18.694 "num_base_bdevs_discovered": 2, 00:15:18.694 "num_base_bdevs_operational": 3, 00:15:18.694 "base_bdevs_list": [ 00:15:18.694 { 00:15:18.694 "name": "BaseBdev1", 00:15:18.694 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:18.694 "is_configured": true, 00:15:18.694 "data_offset": 0, 00:15:18.694 "data_size": 65536 00:15:18.694 }, 00:15:18.694 { 00:15:18.694 "name": null, 00:15:18.694 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:18.694 "is_configured": false, 00:15:18.694 "data_offset": 0, 00:15:18.694 "data_size": 65536 00:15:18.694 }, 00:15:18.694 { 00:15:18.694 "name": "BaseBdev3", 00:15:18.694 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:18.694 "is_configured": true, 00:15:18.694 "data_offset": 0, 00:15:18.694 "data_size": 65536 00:15:18.694 } 00:15:18.694 ] 00:15:18.694 }' 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.694 09:34:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.953 09:34:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.953 [2024-11-15 09:34:07.393115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.953 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.954 09:34:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.954 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.954 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.954 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.954 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.954 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.954 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.954 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.212 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.212 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.212 "name": "Existed_Raid", 00:15:19.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.212 "strip_size_kb": 64, 00:15:19.212 "state": "configuring", 00:15:19.212 "raid_level": "raid5f", 00:15:19.212 "superblock": false, 00:15:19.212 "num_base_bdevs": 3, 00:15:19.212 "num_base_bdevs_discovered": 1, 00:15:19.212 "num_base_bdevs_operational": 3, 00:15:19.212 "base_bdevs_list": [ 00:15:19.212 { 00:15:19.212 "name": "BaseBdev1", 00:15:19.212 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:19.212 "is_configured": true, 00:15:19.212 "data_offset": 0, 00:15:19.212 "data_size": 65536 00:15:19.212 }, 00:15:19.212 { 00:15:19.212 "name": null, 00:15:19.212 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:19.212 "is_configured": false, 00:15:19.212 "data_offset": 0, 00:15:19.212 "data_size": 65536 00:15:19.212 }, 00:15:19.212 { 00:15:19.212 "name": null, 
00:15:19.212 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:19.212 "is_configured": false, 00:15:19.212 "data_offset": 0, 00:15:19.212 "data_size": 65536 00:15:19.212 } 00:15:19.212 ] 00:15:19.212 }' 00:15:19.212 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.213 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.472 [2024-11-15 09:34:07.916277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.472 09:34:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.472 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.473 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.473 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.733 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.733 "name": "Existed_Raid", 00:15:19.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.733 "strip_size_kb": 64, 00:15:19.733 "state": "configuring", 00:15:19.733 "raid_level": "raid5f", 00:15:19.733 "superblock": false, 00:15:19.733 "num_base_bdevs": 3, 00:15:19.733 "num_base_bdevs_discovered": 2, 00:15:19.733 "num_base_bdevs_operational": 3, 00:15:19.733 "base_bdevs_list": [ 00:15:19.733 { 
00:15:19.733 "name": "BaseBdev1", 00:15:19.733 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:19.733 "is_configured": true, 00:15:19.733 "data_offset": 0, 00:15:19.733 "data_size": 65536 00:15:19.733 }, 00:15:19.733 { 00:15:19.733 "name": null, 00:15:19.733 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:19.733 "is_configured": false, 00:15:19.733 "data_offset": 0, 00:15:19.733 "data_size": 65536 00:15:19.733 }, 00:15:19.733 { 00:15:19.733 "name": "BaseBdev3", 00:15:19.733 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:19.733 "is_configured": true, 00:15:19.733 "data_offset": 0, 00:15:19.733 "data_size": 65536 00:15:19.733 } 00:15:19.733 ] 00:15:19.733 }' 00:15:19.733 09:34:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.733 09:34:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.990 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.990 [2024-11-15 09:34:08.435454] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.249 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.250 "name": "Existed_Raid", 00:15:20.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.250 "strip_size_kb": 64, 00:15:20.250 "state": "configuring", 00:15:20.250 "raid_level": "raid5f", 00:15:20.250 "superblock": false, 00:15:20.250 "num_base_bdevs": 3, 00:15:20.250 "num_base_bdevs_discovered": 1, 00:15:20.250 "num_base_bdevs_operational": 3, 00:15:20.250 "base_bdevs_list": [ 00:15:20.250 { 00:15:20.250 "name": null, 00:15:20.250 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:20.250 "is_configured": false, 00:15:20.250 "data_offset": 0, 00:15:20.250 "data_size": 65536 00:15:20.250 }, 00:15:20.250 { 00:15:20.250 "name": null, 00:15:20.250 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:20.250 "is_configured": false, 00:15:20.250 "data_offset": 0, 00:15:20.250 "data_size": 65536 00:15:20.250 }, 00:15:20.250 { 00:15:20.250 "name": "BaseBdev3", 00:15:20.250 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:20.250 "is_configured": true, 00:15:20.250 "data_offset": 0, 00:15:20.250 "data_size": 65536 00:15:20.250 } 00:15:20.250 ] 00:15:20.250 }' 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.250 09:34:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.816 [2024-11-15 09:34:09.058996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.816 09:34:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.816 "name": "Existed_Raid", 00:15:20.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.816 "strip_size_kb": 64, 00:15:20.816 "state": "configuring", 00:15:20.816 "raid_level": "raid5f", 00:15:20.816 "superblock": false, 00:15:20.816 "num_base_bdevs": 3, 00:15:20.816 "num_base_bdevs_discovered": 2, 00:15:20.816 "num_base_bdevs_operational": 3, 00:15:20.816 "base_bdevs_list": [ 00:15:20.816 { 00:15:20.816 "name": null, 00:15:20.816 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:20.816 "is_configured": false, 00:15:20.816 "data_offset": 0, 00:15:20.816 "data_size": 65536 00:15:20.816 }, 00:15:20.816 { 00:15:20.816 "name": "BaseBdev2", 00:15:20.816 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:20.816 "is_configured": true, 00:15:20.816 "data_offset": 0, 00:15:20.816 "data_size": 65536 00:15:20.816 }, 00:15:20.816 { 00:15:20.816 "name": "BaseBdev3", 00:15:20.816 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:20.816 "is_configured": true, 00:15:20.816 "data_offset": 0, 00:15:20.816 "data_size": 65536 00:15:20.816 } 00:15:20.816 ] 00:15:20.816 }' 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.816 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.384 09:34:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 67e45d17-3ef4-4839-b0c3-5b9486a25cbb 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.384 [2024-11-15 09:34:09.686737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:21.384 [2024-11-15 09:34:09.686799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:21.384 [2024-11-15 09:34:09.686811] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:21.384 [2024-11-15 09:34:09.687108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:21.384 [2024-11-15 09:34:09.692958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:21.384 [2024-11-15 09:34:09.692982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:21.384 [2024-11-15 09:34:09.693269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.384 NewBaseBdev 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.384 09:34:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.384 [ 00:15:21.384 { 00:15:21.384 "name": "NewBaseBdev", 00:15:21.384 "aliases": [ 00:15:21.384 "67e45d17-3ef4-4839-b0c3-5b9486a25cbb" 00:15:21.384 ], 00:15:21.384 "product_name": "Malloc disk", 00:15:21.384 "block_size": 512, 00:15:21.384 "num_blocks": 65536, 00:15:21.384 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:21.384 "assigned_rate_limits": { 00:15:21.384 "rw_ios_per_sec": 0, 00:15:21.384 "rw_mbytes_per_sec": 0, 00:15:21.384 "r_mbytes_per_sec": 0, 00:15:21.384 "w_mbytes_per_sec": 0 00:15:21.384 }, 00:15:21.384 "claimed": true, 00:15:21.384 "claim_type": "exclusive_write", 00:15:21.384 "zoned": false, 00:15:21.384 "supported_io_types": { 00:15:21.384 "read": true, 00:15:21.384 "write": true, 00:15:21.384 "unmap": true, 00:15:21.384 "flush": true, 00:15:21.384 "reset": true, 00:15:21.384 "nvme_admin": false, 00:15:21.384 "nvme_io": false, 00:15:21.384 "nvme_io_md": false, 00:15:21.384 "write_zeroes": true, 00:15:21.384 "zcopy": true, 00:15:21.384 "get_zone_info": false, 00:15:21.384 "zone_management": false, 00:15:21.384 "zone_append": false, 00:15:21.384 "compare": false, 00:15:21.384 "compare_and_write": false, 00:15:21.384 "abort": true, 00:15:21.384 "seek_hole": false, 00:15:21.384 "seek_data": false, 00:15:21.384 "copy": true, 00:15:21.384 "nvme_iov_md": false 00:15:21.384 }, 00:15:21.384 "memory_domains": [ 00:15:21.384 { 00:15:21.384 "dma_device_id": "system", 00:15:21.384 "dma_device_type": 1 00:15:21.384 }, 00:15:21.384 { 00:15:21.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.384 "dma_device_type": 2 00:15:21.384 } 00:15:21.384 ], 00:15:21.384 "driver_specific": {} 00:15:21.384 } 00:15:21.384 ] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:21.384 09:34:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.384 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.384 "name": "Existed_Raid", 00:15:21.384 "uuid": "d52cbfcb-198b-4e94-896f-a06c0a613efa", 00:15:21.384 "strip_size_kb": 64, 00:15:21.384 "state": "online", 
00:15:21.384 "raid_level": "raid5f", 00:15:21.384 "superblock": false, 00:15:21.384 "num_base_bdevs": 3, 00:15:21.384 "num_base_bdevs_discovered": 3, 00:15:21.384 "num_base_bdevs_operational": 3, 00:15:21.384 "base_bdevs_list": [ 00:15:21.384 { 00:15:21.384 "name": "NewBaseBdev", 00:15:21.384 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:21.384 "is_configured": true, 00:15:21.384 "data_offset": 0, 00:15:21.384 "data_size": 65536 00:15:21.384 }, 00:15:21.384 { 00:15:21.384 "name": "BaseBdev2", 00:15:21.384 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:21.384 "is_configured": true, 00:15:21.384 "data_offset": 0, 00:15:21.384 "data_size": 65536 00:15:21.384 }, 00:15:21.384 { 00:15:21.384 "name": "BaseBdev3", 00:15:21.385 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:21.385 "is_configured": true, 00:15:21.385 "data_offset": 0, 00:15:21.385 "data_size": 65536 00:15:21.385 } 00:15:21.385 ] 00:15:21.385 }' 00:15:21.385 09:34:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.385 09:34:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:21.954 09:34:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.954 [2024-11-15 09:34:10.248243] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.954 "name": "Existed_Raid", 00:15:21.954 "aliases": [ 00:15:21.954 "d52cbfcb-198b-4e94-896f-a06c0a613efa" 00:15:21.954 ], 00:15:21.954 "product_name": "Raid Volume", 00:15:21.954 "block_size": 512, 00:15:21.954 "num_blocks": 131072, 00:15:21.954 "uuid": "d52cbfcb-198b-4e94-896f-a06c0a613efa", 00:15:21.954 "assigned_rate_limits": { 00:15:21.954 "rw_ios_per_sec": 0, 00:15:21.954 "rw_mbytes_per_sec": 0, 00:15:21.954 "r_mbytes_per_sec": 0, 00:15:21.954 "w_mbytes_per_sec": 0 00:15:21.954 }, 00:15:21.954 "claimed": false, 00:15:21.954 "zoned": false, 00:15:21.954 "supported_io_types": { 00:15:21.954 "read": true, 00:15:21.954 "write": true, 00:15:21.954 "unmap": false, 00:15:21.954 "flush": false, 00:15:21.954 "reset": true, 00:15:21.954 "nvme_admin": false, 00:15:21.954 "nvme_io": false, 00:15:21.954 "nvme_io_md": false, 00:15:21.954 "write_zeroes": true, 00:15:21.954 "zcopy": false, 00:15:21.954 "get_zone_info": false, 00:15:21.954 "zone_management": false, 00:15:21.954 "zone_append": false, 00:15:21.954 "compare": false, 00:15:21.954 "compare_and_write": false, 00:15:21.954 "abort": false, 00:15:21.954 "seek_hole": false, 00:15:21.954 "seek_data": false, 00:15:21.954 "copy": false, 00:15:21.954 "nvme_iov_md": false 00:15:21.954 }, 00:15:21.954 "driver_specific": { 00:15:21.954 "raid": { 00:15:21.954 "uuid": 
"d52cbfcb-198b-4e94-896f-a06c0a613efa", 00:15:21.954 "strip_size_kb": 64, 00:15:21.954 "state": "online", 00:15:21.954 "raid_level": "raid5f", 00:15:21.954 "superblock": false, 00:15:21.954 "num_base_bdevs": 3, 00:15:21.954 "num_base_bdevs_discovered": 3, 00:15:21.954 "num_base_bdevs_operational": 3, 00:15:21.954 "base_bdevs_list": [ 00:15:21.954 { 00:15:21.954 "name": "NewBaseBdev", 00:15:21.954 "uuid": "67e45d17-3ef4-4839-b0c3-5b9486a25cbb", 00:15:21.954 "is_configured": true, 00:15:21.954 "data_offset": 0, 00:15:21.954 "data_size": 65536 00:15:21.954 }, 00:15:21.954 { 00:15:21.954 "name": "BaseBdev2", 00:15:21.954 "uuid": "e1148d2d-beec-4067-84f6-00672d80c6be", 00:15:21.954 "is_configured": true, 00:15:21.954 "data_offset": 0, 00:15:21.954 "data_size": 65536 00:15:21.954 }, 00:15:21.954 { 00:15:21.954 "name": "BaseBdev3", 00:15:21.954 "uuid": "ab2500b6-f4ac-420c-8aac-4304f38d5766", 00:15:21.954 "is_configured": true, 00:15:21.954 "data_offset": 0, 00:15:21.954 "data_size": 65536 00:15:21.954 } 00:15:21.954 ] 00:15:21.954 } 00:15:21.954 } 00:15:21.954 }' 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:21.954 BaseBdev2 00:15:21.954 BaseBdev3' 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.954 09:34:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.954 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.215 09:34:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.215 [2024-11-15 09:34:10.523485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.215 [2024-11-15 09:34:10.523533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.215 [2024-11-15 09:34:10.523627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.215 [2024-11-15 09:34:10.523962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.215 [2024-11-15 09:34:10.523987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80301 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80301 ']' 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 80301 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80301 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:22.215 killing process with pid 80301 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80301' 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80301 00:15:22.215 [2024-11-15 09:34:10.571309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.215 09:34:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80301 00:15:22.475 [2024-11-15 09:34:10.894652] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.856 ************************************ 00:15:23.856 END TEST raid5f_state_function_test 00:15:23.856 ************************************ 00:15:23.856 09:34:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:23.856 00:15:23.856 real 0m11.169s 00:15:23.856 user 0m17.665s 00:15:23.856 sys 0m2.165s 00:15:23.856 09:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:23.856 09:34:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.856 09:34:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:23.856 09:34:12 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:23.856 09:34:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:23.856 09:34:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.856 ************************************ 00:15:23.856 START TEST raid5f_state_function_test_sb 00:15:23.856 ************************************ 00:15:23.856 09:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:23.857 09:34:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80928 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80928' 00:15:23.857 Process raid pid: 80928 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80928 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80928 ']' 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:23.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:23.857 09:34:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.857 [2024-11-15 09:34:12.235172] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:15:23.857 [2024-11-15 09:34:12.235337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.117 [2024-11-15 09:34:12.420925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.117 [2024-11-15 09:34:12.536671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.376 [2024-11-15 09:34:12.747018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.376 [2024-11-15 09:34:12.747063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.944 [2024-11-15 09:34:13.159235] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.944 [2024-11-15 09:34:13.159302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.944 [2024-11-15 09:34:13.159315] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.944 [2024-11-15 09:34:13.159327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.944 [2024-11-15 09:34:13.159335] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:24.944 [2024-11-15 09:34:13.159345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.944 09:34:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.944 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.945 "name": "Existed_Raid", 00:15:24.945 "uuid": "7df172ef-bcde-4635-a9b0-cedddfd6a0cc", 00:15:24.945 "strip_size_kb": 64, 00:15:24.945 "state": "configuring", 00:15:24.945 "raid_level": "raid5f", 00:15:24.945 "superblock": true, 00:15:24.945 "num_base_bdevs": 3, 00:15:24.945 "num_base_bdevs_discovered": 0, 00:15:24.945 "num_base_bdevs_operational": 3, 00:15:24.945 "base_bdevs_list": [ 00:15:24.945 { 00:15:24.945 "name": "BaseBdev1", 00:15:24.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.945 "is_configured": false, 00:15:24.945 "data_offset": 0, 00:15:24.945 "data_size": 0 00:15:24.945 }, 00:15:24.945 { 00:15:24.945 "name": "BaseBdev2", 00:15:24.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.945 "is_configured": false, 00:15:24.945 "data_offset": 0, 00:15:24.945 "data_size": 0 00:15:24.945 }, 00:15:24.945 { 00:15:24.945 "name": "BaseBdev3", 00:15:24.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.945 "is_configured": false, 00:15:24.945 "data_offset": 0, 00:15:24.945 "data_size": 0 00:15:24.945 } 00:15:24.945 ] 00:15:24.945 }' 00:15:24.945 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.945 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.203 [2024-11-15 09:34:13.618396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.203 
[2024-11-15 09:34:13.618450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.203 [2024-11-15 09:34:13.630362] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.203 [2024-11-15 09:34:13.630414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.203 [2024-11-15 09:34:13.630424] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.203 [2024-11-15 09:34:13.630434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.203 [2024-11-15 09:34:13.630440] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.203 [2024-11-15 09:34:13.630451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.203 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.463 [2024-11-15 09:34:13.684103] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.463 BaseBdev1 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.463 [ 00:15:25.463 { 00:15:25.463 "name": "BaseBdev1", 00:15:25.463 "aliases": [ 00:15:25.463 "b605e001-c092-46ac-894b-bacbcdfc6372" 00:15:25.463 ], 00:15:25.463 "product_name": "Malloc disk", 00:15:25.463 "block_size": 512, 00:15:25.463 
"num_blocks": 65536, 00:15:25.463 "uuid": "b605e001-c092-46ac-894b-bacbcdfc6372", 00:15:25.463 "assigned_rate_limits": { 00:15:25.463 "rw_ios_per_sec": 0, 00:15:25.463 "rw_mbytes_per_sec": 0, 00:15:25.463 "r_mbytes_per_sec": 0, 00:15:25.463 "w_mbytes_per_sec": 0 00:15:25.463 }, 00:15:25.463 "claimed": true, 00:15:25.463 "claim_type": "exclusive_write", 00:15:25.463 "zoned": false, 00:15:25.463 "supported_io_types": { 00:15:25.463 "read": true, 00:15:25.463 "write": true, 00:15:25.463 "unmap": true, 00:15:25.463 "flush": true, 00:15:25.463 "reset": true, 00:15:25.463 "nvme_admin": false, 00:15:25.463 "nvme_io": false, 00:15:25.463 "nvme_io_md": false, 00:15:25.463 "write_zeroes": true, 00:15:25.463 "zcopy": true, 00:15:25.463 "get_zone_info": false, 00:15:25.463 "zone_management": false, 00:15:25.463 "zone_append": false, 00:15:25.463 "compare": false, 00:15:25.463 "compare_and_write": false, 00:15:25.463 "abort": true, 00:15:25.463 "seek_hole": false, 00:15:25.463 "seek_data": false, 00:15:25.463 "copy": true, 00:15:25.463 "nvme_iov_md": false 00:15:25.463 }, 00:15:25.463 "memory_domains": [ 00:15:25.463 { 00:15:25.463 "dma_device_id": "system", 00:15:25.463 "dma_device_type": 1 00:15:25.463 }, 00:15:25.463 { 00:15:25.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.463 "dma_device_type": 2 00:15:25.463 } 00:15:25.463 ], 00:15:25.463 "driver_specific": {} 00:15:25.463 } 00:15:25.463 ] 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.463 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.463 "name": "Existed_Raid", 00:15:25.463 "uuid": "a3e766bf-cce2-49cc-b3d3-6f73dcbc0f0c", 00:15:25.463 "strip_size_kb": 64, 00:15:25.463 "state": "configuring", 00:15:25.463 "raid_level": "raid5f", 00:15:25.463 "superblock": true, 00:15:25.464 "num_base_bdevs": 3, 00:15:25.464 "num_base_bdevs_discovered": 1, 00:15:25.464 "num_base_bdevs_operational": 3, 00:15:25.464 "base_bdevs_list": [ 00:15:25.464 { 00:15:25.464 
"name": "BaseBdev1", 00:15:25.464 "uuid": "b605e001-c092-46ac-894b-bacbcdfc6372", 00:15:25.464 "is_configured": true, 00:15:25.464 "data_offset": 2048, 00:15:25.464 "data_size": 63488 00:15:25.464 }, 00:15:25.464 { 00:15:25.464 "name": "BaseBdev2", 00:15:25.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.464 "is_configured": false, 00:15:25.464 "data_offset": 0, 00:15:25.464 "data_size": 0 00:15:25.464 }, 00:15:25.464 { 00:15:25.464 "name": "BaseBdev3", 00:15:25.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.464 "is_configured": false, 00:15:25.464 "data_offset": 0, 00:15:25.464 "data_size": 0 00:15:25.464 } 00:15:25.464 ] 00:15:25.464 }' 00:15:25.464 09:34:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.464 09:34:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.722 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.722 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.722 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.722 [2024-11-15 09:34:14.187413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.722 [2024-11-15 09:34:14.187487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:25.982 [2024-11-15 09:34:14.195471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.982 [2024-11-15 09:34:14.197592] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.982 [2024-11-15 09:34:14.197644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.982 [2024-11-15 09:34:14.197655] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.982 [2024-11-15 09:34:14.197664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.982 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.983 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.983 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.983 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.983 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.983 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.983 "name": "Existed_Raid", 00:15:25.983 "uuid": "b7f4d25d-47f2-4b48-a42b-a9e285f55d9a", 00:15:25.983 "strip_size_kb": 64, 00:15:25.983 "state": "configuring", 00:15:25.983 "raid_level": "raid5f", 00:15:25.983 "superblock": true, 00:15:25.983 "num_base_bdevs": 3, 00:15:25.983 "num_base_bdevs_discovered": 1, 00:15:25.983 "num_base_bdevs_operational": 3, 00:15:25.983 "base_bdevs_list": [ 00:15:25.983 { 00:15:25.983 "name": "BaseBdev1", 00:15:25.983 "uuid": "b605e001-c092-46ac-894b-bacbcdfc6372", 00:15:25.983 "is_configured": true, 00:15:25.983 "data_offset": 2048, 00:15:25.983 "data_size": 63488 00:15:25.983 }, 00:15:25.983 { 00:15:25.983 "name": "BaseBdev2", 00:15:25.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.983 "is_configured": false, 00:15:25.983 "data_offset": 0, 00:15:25.983 "data_size": 0 00:15:25.983 }, 00:15:25.983 { 00:15:25.983 "name": "BaseBdev3", 00:15:25.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.983 "is_configured": false, 00:15:25.983 "data_offset": 0, 00:15:25.983 "data_size": 
0 00:15:25.983 } 00:15:25.983 ] 00:15:25.983 }' 00:15:25.983 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.983 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.241 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.241 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.241 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.500 [2024-11-15 09:34:14.730086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.500 BaseBdev2 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.500 [ 00:15:26.500 { 00:15:26.500 "name": "BaseBdev2", 00:15:26.500 "aliases": [ 00:15:26.500 "76f26ed9-9f8c-427f-9f7e-ac54f300b15e" 00:15:26.500 ], 00:15:26.500 "product_name": "Malloc disk", 00:15:26.500 "block_size": 512, 00:15:26.500 "num_blocks": 65536, 00:15:26.500 "uuid": "76f26ed9-9f8c-427f-9f7e-ac54f300b15e", 00:15:26.500 "assigned_rate_limits": { 00:15:26.500 "rw_ios_per_sec": 0, 00:15:26.500 "rw_mbytes_per_sec": 0, 00:15:26.500 "r_mbytes_per_sec": 0, 00:15:26.500 "w_mbytes_per_sec": 0 00:15:26.500 }, 00:15:26.500 "claimed": true, 00:15:26.500 "claim_type": "exclusive_write", 00:15:26.500 "zoned": false, 00:15:26.500 "supported_io_types": { 00:15:26.500 "read": true, 00:15:26.500 "write": true, 00:15:26.500 "unmap": true, 00:15:26.500 "flush": true, 00:15:26.500 "reset": true, 00:15:26.500 "nvme_admin": false, 00:15:26.500 "nvme_io": false, 00:15:26.500 "nvme_io_md": false, 00:15:26.500 "write_zeroes": true, 00:15:26.500 "zcopy": true, 00:15:26.500 "get_zone_info": false, 00:15:26.500 "zone_management": false, 00:15:26.500 "zone_append": false, 00:15:26.500 "compare": false, 00:15:26.500 "compare_and_write": false, 00:15:26.500 "abort": true, 00:15:26.500 "seek_hole": false, 00:15:26.500 "seek_data": false, 00:15:26.500 "copy": true, 00:15:26.500 "nvme_iov_md": false 00:15:26.500 }, 00:15:26.500 "memory_domains": [ 00:15:26.500 { 00:15:26.500 "dma_device_id": "system", 00:15:26.500 "dma_device_type": 1 00:15:26.500 }, 00:15:26.500 { 00:15:26.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.500 "dma_device_type": 2 00:15:26.500 } 
00:15:26.500 ], 00:15:26.500 "driver_specific": {} 00:15:26.500 } 00:15:26.500 ] 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:26.500 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.501 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.501 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.501 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.501 "name": "Existed_Raid", 00:15:26.501 "uuid": "b7f4d25d-47f2-4b48-a42b-a9e285f55d9a", 00:15:26.501 "strip_size_kb": 64, 00:15:26.501 "state": "configuring", 00:15:26.501 "raid_level": "raid5f", 00:15:26.501 "superblock": true, 00:15:26.501 "num_base_bdevs": 3, 00:15:26.501 "num_base_bdevs_discovered": 2, 00:15:26.501 "num_base_bdevs_operational": 3, 00:15:26.501 "base_bdevs_list": [ 00:15:26.501 { 00:15:26.501 "name": "BaseBdev1", 00:15:26.501 "uuid": "b605e001-c092-46ac-894b-bacbcdfc6372", 00:15:26.501 "is_configured": true, 00:15:26.501 "data_offset": 2048, 00:15:26.501 "data_size": 63488 00:15:26.501 }, 00:15:26.501 { 00:15:26.501 "name": "BaseBdev2", 00:15:26.501 "uuid": "76f26ed9-9f8c-427f-9f7e-ac54f300b15e", 00:15:26.501 "is_configured": true, 00:15:26.501 "data_offset": 2048, 00:15:26.501 "data_size": 63488 00:15:26.501 }, 00:15:26.501 { 00:15:26.501 "name": "BaseBdev3", 00:15:26.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.501 "is_configured": false, 00:15:26.501 "data_offset": 0, 00:15:26.501 "data_size": 0 00:15:26.501 } 00:15:26.501 ] 00:15:26.501 }' 00:15:26.501 09:34:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.501 09:34:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.066 [2024-11-15 09:34:15.319362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.066 [2024-11-15 09:34:15.319780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.066 [2024-11-15 09:34:15.319810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:27.066 [2024-11-15 09:34:15.320190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:27.066 BaseBdev3 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.066 [2024-11-15 09:34:15.326364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.066 [2024-11-15 09:34:15.326452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:15:27.066 [2024-11-15 09:34:15.326687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.066 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.066 [ 00:15:27.066 { 00:15:27.066 "name": "BaseBdev3", 00:15:27.066 "aliases": [ 00:15:27.066 "9d1182e1-11fc-47ad-befb-86bbf545ee18" 00:15:27.066 ], 00:15:27.066 "product_name": "Malloc disk", 00:15:27.066 "block_size": 512, 00:15:27.066 "num_blocks": 65536, 00:15:27.066 "uuid": "9d1182e1-11fc-47ad-befb-86bbf545ee18", 00:15:27.066 "assigned_rate_limits": { 00:15:27.066 "rw_ios_per_sec": 0, 00:15:27.066 "rw_mbytes_per_sec": 0, 00:15:27.066 "r_mbytes_per_sec": 0, 00:15:27.066 "w_mbytes_per_sec": 0 00:15:27.066 }, 00:15:27.066 "claimed": true, 00:15:27.066 "claim_type": "exclusive_write", 00:15:27.066 "zoned": false, 00:15:27.066 "supported_io_types": { 00:15:27.066 "read": true, 00:15:27.066 "write": true, 00:15:27.066 "unmap": true, 00:15:27.066 "flush": true, 00:15:27.066 "reset": true, 00:15:27.066 "nvme_admin": false, 00:15:27.066 "nvme_io": false, 00:15:27.066 "nvme_io_md": false, 00:15:27.066 "write_zeroes": true, 00:15:27.066 "zcopy": true, 00:15:27.066 "get_zone_info": false, 00:15:27.066 "zone_management": false, 00:15:27.066 "zone_append": false, 00:15:27.066 "compare": false, 00:15:27.067 "compare_and_write": false, 00:15:27.067 "abort": true, 00:15:27.067 "seek_hole": false, 00:15:27.067 "seek_data": false, 00:15:27.067 "copy": true, 00:15:27.067 "nvme_iov_md": 
false 00:15:27.067 }, 00:15:27.067 "memory_domains": [ 00:15:27.067 { 00:15:27.067 "dma_device_id": "system", 00:15:27.067 "dma_device_type": 1 00:15:27.067 }, 00:15:27.067 { 00:15:27.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.067 "dma_device_type": 2 00:15:27.067 } 00:15:27.067 ], 00:15:27.067 "driver_specific": {} 00:15:27.067 } 00:15:27.067 ] 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.067 "name": "Existed_Raid", 00:15:27.067 "uuid": "b7f4d25d-47f2-4b48-a42b-a9e285f55d9a", 00:15:27.067 "strip_size_kb": 64, 00:15:27.067 "state": "online", 00:15:27.067 "raid_level": "raid5f", 00:15:27.067 "superblock": true, 00:15:27.067 "num_base_bdevs": 3, 00:15:27.067 "num_base_bdevs_discovered": 3, 00:15:27.067 "num_base_bdevs_operational": 3, 00:15:27.067 "base_bdevs_list": [ 00:15:27.067 { 00:15:27.067 "name": "BaseBdev1", 00:15:27.067 "uuid": "b605e001-c092-46ac-894b-bacbcdfc6372", 00:15:27.067 "is_configured": true, 00:15:27.067 "data_offset": 2048, 00:15:27.067 "data_size": 63488 00:15:27.067 }, 00:15:27.067 { 00:15:27.067 "name": "BaseBdev2", 00:15:27.067 "uuid": "76f26ed9-9f8c-427f-9f7e-ac54f300b15e", 00:15:27.067 "is_configured": true, 00:15:27.067 "data_offset": 2048, 00:15:27.067 "data_size": 63488 00:15:27.067 }, 00:15:27.067 { 00:15:27.067 "name": "BaseBdev3", 00:15:27.067 "uuid": "9d1182e1-11fc-47ad-befb-86bbf545ee18", 00:15:27.067 "is_configured": true, 00:15:27.067 "data_offset": 2048, 00:15:27.067 "data_size": 63488 00:15:27.067 } 00:15:27.067 ] 00:15:27.067 }' 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.067 09:34:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.636 [2024-11-15 09:34:15.841294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.636 "name": "Existed_Raid", 00:15:27.636 "aliases": [ 00:15:27.636 "b7f4d25d-47f2-4b48-a42b-a9e285f55d9a" 00:15:27.636 ], 00:15:27.636 "product_name": "Raid Volume", 00:15:27.636 "block_size": 512, 00:15:27.636 "num_blocks": 126976, 00:15:27.636 "uuid": "b7f4d25d-47f2-4b48-a42b-a9e285f55d9a", 00:15:27.636 "assigned_rate_limits": { 00:15:27.636 "rw_ios_per_sec": 0, 00:15:27.636 "rw_mbytes_per_sec": 0, 00:15:27.636 "r_mbytes_per_sec": 
0, 00:15:27.636 "w_mbytes_per_sec": 0 00:15:27.636 }, 00:15:27.636 "claimed": false, 00:15:27.636 "zoned": false, 00:15:27.636 "supported_io_types": { 00:15:27.636 "read": true, 00:15:27.636 "write": true, 00:15:27.636 "unmap": false, 00:15:27.636 "flush": false, 00:15:27.636 "reset": true, 00:15:27.636 "nvme_admin": false, 00:15:27.636 "nvme_io": false, 00:15:27.636 "nvme_io_md": false, 00:15:27.636 "write_zeroes": true, 00:15:27.636 "zcopy": false, 00:15:27.636 "get_zone_info": false, 00:15:27.636 "zone_management": false, 00:15:27.636 "zone_append": false, 00:15:27.636 "compare": false, 00:15:27.636 "compare_and_write": false, 00:15:27.636 "abort": false, 00:15:27.636 "seek_hole": false, 00:15:27.636 "seek_data": false, 00:15:27.636 "copy": false, 00:15:27.636 "nvme_iov_md": false 00:15:27.636 }, 00:15:27.636 "driver_specific": { 00:15:27.636 "raid": { 00:15:27.636 "uuid": "b7f4d25d-47f2-4b48-a42b-a9e285f55d9a", 00:15:27.636 "strip_size_kb": 64, 00:15:27.636 "state": "online", 00:15:27.636 "raid_level": "raid5f", 00:15:27.636 "superblock": true, 00:15:27.636 "num_base_bdevs": 3, 00:15:27.636 "num_base_bdevs_discovered": 3, 00:15:27.636 "num_base_bdevs_operational": 3, 00:15:27.636 "base_bdevs_list": [ 00:15:27.636 { 00:15:27.636 "name": "BaseBdev1", 00:15:27.636 "uuid": "b605e001-c092-46ac-894b-bacbcdfc6372", 00:15:27.636 "is_configured": true, 00:15:27.636 "data_offset": 2048, 00:15:27.636 "data_size": 63488 00:15:27.636 }, 00:15:27.636 { 00:15:27.636 "name": "BaseBdev2", 00:15:27.636 "uuid": "76f26ed9-9f8c-427f-9f7e-ac54f300b15e", 00:15:27.636 "is_configured": true, 00:15:27.636 "data_offset": 2048, 00:15:27.636 "data_size": 63488 00:15:27.636 }, 00:15:27.636 { 00:15:27.636 "name": "BaseBdev3", 00:15:27.636 "uuid": "9d1182e1-11fc-47ad-befb-86bbf545ee18", 00:15:27.636 "is_configured": true, 00:15:27.636 "data_offset": 2048, 00:15:27.636 "data_size": 63488 00:15:27.636 } 00:15:27.636 ] 00:15:27.636 } 00:15:27.636 } 00:15:27.636 }' 00:15:27.636 09:34:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:27.636 BaseBdev2 00:15:27.636 BaseBdev3' 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.636 09:34:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.636 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.637 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.637 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.637 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.896 [2024-11-15 09:34:16.128668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.896 "name": "Existed_Raid", 00:15:27.896 "uuid": "b7f4d25d-47f2-4b48-a42b-a9e285f55d9a", 00:15:27.896 "strip_size_kb": 64, 00:15:27.896 "state": "online", 00:15:27.896 "raid_level": "raid5f", 00:15:27.896 "superblock": true, 00:15:27.896 "num_base_bdevs": 3, 00:15:27.896 "num_base_bdevs_discovered": 2, 00:15:27.896 "num_base_bdevs_operational": 2, 00:15:27.896 "base_bdevs_list": [ 00:15:27.896 { 00:15:27.896 "name": null, 00:15:27.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.896 "is_configured": false, 00:15:27.896 "data_offset": 0, 00:15:27.896 "data_size": 63488 00:15:27.896 }, 00:15:27.896 { 00:15:27.896 "name": "BaseBdev2", 00:15:27.896 "uuid": "76f26ed9-9f8c-427f-9f7e-ac54f300b15e", 00:15:27.896 "is_configured": true, 00:15:27.896 "data_offset": 2048, 00:15:27.896 "data_size": 63488 00:15:27.896 }, 00:15:27.896 { 00:15:27.896 "name": "BaseBdev3", 00:15:27.896 "uuid": "9d1182e1-11fc-47ad-befb-86bbf545ee18", 00:15:27.896 "is_configured": true, 00:15:27.896 "data_offset": 2048, 00:15:27.896 "data_size": 63488 00:15:27.896 } 00:15:27.896 ] 00:15:27.896 }' 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.896 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.466 [2024-11-15 09:34:16.765235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.466 [2024-11-15 09:34:16.765488] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.466 [2024-11-15 09:34:16.867573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.466 09:34:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.466 09:34:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.466 [2024-11-15 09:34:16.927557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.466 [2024-11-15 09:34:16.927726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:28.725 
09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.725 BaseBdev2 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.725 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.725 [ 00:15:28.725 { 00:15:28.725 "name": "BaseBdev2", 00:15:28.725 "aliases": [ 00:15:28.725 "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0" 00:15:28.725 ], 00:15:28.725 "product_name": "Malloc disk", 00:15:28.725 "block_size": 512, 00:15:28.725 "num_blocks": 65536, 00:15:28.725 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:28.725 "assigned_rate_limits": { 00:15:28.725 "rw_ios_per_sec": 0, 00:15:28.725 "rw_mbytes_per_sec": 0, 00:15:28.725 "r_mbytes_per_sec": 0, 00:15:28.725 "w_mbytes_per_sec": 0 00:15:28.725 }, 00:15:28.725 "claimed": false, 00:15:28.725 "zoned": false, 00:15:28.725 "supported_io_types": { 00:15:28.725 "read": true, 00:15:28.725 "write": true, 00:15:28.725 "unmap": true, 00:15:28.725 "flush": true, 00:15:28.725 "reset": true, 00:15:28.725 "nvme_admin": false, 00:15:28.725 "nvme_io": false, 00:15:28.725 "nvme_io_md": false, 00:15:28.725 "write_zeroes": true, 00:15:28.725 "zcopy": true, 00:15:28.725 "get_zone_info": false, 00:15:28.725 "zone_management": false, 00:15:28.725 "zone_append": false, 00:15:28.725 "compare": false, 00:15:28.725 "compare_and_write": false, 
00:15:28.725 "abort": true, 00:15:28.725 "seek_hole": false, 00:15:28.725 "seek_data": false, 00:15:28.725 "copy": true, 00:15:28.725 "nvme_iov_md": false 00:15:28.725 }, 00:15:28.725 "memory_domains": [ 00:15:28.725 { 00:15:28.725 "dma_device_id": "system", 00:15:28.725 "dma_device_type": 1 00:15:28.725 }, 00:15:28.725 { 00:15:28.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.726 "dma_device_type": 2 00:15:28.726 } 00:15:28.726 ], 00:15:28.726 "driver_specific": {} 00:15:28.726 } 00:15:28.726 ] 00:15:28.726 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.726 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:28.726 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.726 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.726 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:28.726 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.726 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.985 BaseBdev3 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.985 [ 00:15:28.985 { 00:15:28.985 "name": "BaseBdev3", 00:15:28.985 "aliases": [ 00:15:28.985 "1980dd81-9d8e-41c6-9484-a1753b352dcf" 00:15:28.985 ], 00:15:28.985 "product_name": "Malloc disk", 00:15:28.985 "block_size": 512, 00:15:28.985 "num_blocks": 65536, 00:15:28.985 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:28.985 "assigned_rate_limits": { 00:15:28.985 "rw_ios_per_sec": 0, 00:15:28.985 "rw_mbytes_per_sec": 0, 00:15:28.985 "r_mbytes_per_sec": 0, 00:15:28.985 "w_mbytes_per_sec": 0 00:15:28.985 }, 00:15:28.985 "claimed": false, 00:15:28.985 "zoned": false, 00:15:28.985 "supported_io_types": { 00:15:28.985 "read": true, 00:15:28.985 "write": true, 00:15:28.985 "unmap": true, 00:15:28.985 "flush": true, 00:15:28.985 "reset": true, 00:15:28.985 "nvme_admin": false, 00:15:28.985 "nvme_io": false, 00:15:28.985 "nvme_io_md": false, 00:15:28.985 "write_zeroes": true, 00:15:28.985 "zcopy": true, 00:15:28.985 "get_zone_info": false, 00:15:28.985 "zone_management": false, 
00:15:28.985 "zone_append": false, 00:15:28.985 "compare": false, 00:15:28.985 "compare_and_write": false, 00:15:28.985 "abort": true, 00:15:28.985 "seek_hole": false, 00:15:28.985 "seek_data": false, 00:15:28.985 "copy": true, 00:15:28.985 "nvme_iov_md": false 00:15:28.985 }, 00:15:28.985 "memory_domains": [ 00:15:28.985 { 00:15:28.985 "dma_device_id": "system", 00:15:28.985 "dma_device_type": 1 00:15:28.985 }, 00:15:28.985 { 00:15:28.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.985 "dma_device_type": 2 00:15:28.985 } 00:15:28.985 ], 00:15:28.985 "driver_specific": {} 00:15:28.985 } 00:15:28.985 ] 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:28.985 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 [2024-11-15 09:34:17.259373] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.986 [2024-11-15 09:34:17.259514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.986 [2024-11-15 09:34:17.259559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.986 [2024-11-15 09:34:17.261523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.986 
09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:28.986 "name": "Existed_Raid", 00:15:28.986 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:28.986 "strip_size_kb": 64, 00:15:28.986 "state": "configuring", 00:15:28.986 "raid_level": "raid5f", 00:15:28.986 "superblock": true, 00:15:28.986 "num_base_bdevs": 3, 00:15:28.986 "num_base_bdevs_discovered": 2, 00:15:28.986 "num_base_bdevs_operational": 3, 00:15:28.986 "base_bdevs_list": [ 00:15:28.986 { 00:15:28.986 "name": "BaseBdev1", 00:15:28.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.986 "is_configured": false, 00:15:28.986 "data_offset": 0, 00:15:28.986 "data_size": 0 00:15:28.986 }, 00:15:28.986 { 00:15:28.986 "name": "BaseBdev2", 00:15:28.986 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:28.986 "is_configured": true, 00:15:28.986 "data_offset": 2048, 00:15:28.986 "data_size": 63488 00:15:28.986 }, 00:15:28.986 { 00:15:28.986 "name": "BaseBdev3", 00:15:28.986 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:28.986 "is_configured": true, 00:15:28.986 "data_offset": 2048, 00:15:28.986 "data_size": 63488 00:15:28.986 } 00:15:28.986 ] 00:15:28.986 }' 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.986 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.246 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:29.246 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.246 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 [2024-11-15 09:34:17.714597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.505 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.505 09:34:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.506 "name": "Existed_Raid", 00:15:29.506 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:29.506 "strip_size_kb": 64, 00:15:29.506 
"state": "configuring", 00:15:29.506 "raid_level": "raid5f", 00:15:29.506 "superblock": true, 00:15:29.506 "num_base_bdevs": 3, 00:15:29.506 "num_base_bdevs_discovered": 1, 00:15:29.506 "num_base_bdevs_operational": 3, 00:15:29.506 "base_bdevs_list": [ 00:15:29.506 { 00:15:29.506 "name": "BaseBdev1", 00:15:29.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.506 "is_configured": false, 00:15:29.506 "data_offset": 0, 00:15:29.506 "data_size": 0 00:15:29.506 }, 00:15:29.506 { 00:15:29.506 "name": null, 00:15:29.506 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:29.506 "is_configured": false, 00:15:29.506 "data_offset": 0, 00:15:29.506 "data_size": 63488 00:15:29.506 }, 00:15:29.506 { 00:15:29.506 "name": "BaseBdev3", 00:15:29.506 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:29.506 "is_configured": true, 00:15:29.506 "data_offset": 2048, 00:15:29.506 "data_size": 63488 00:15:29.506 } 00:15:29.506 ] 00:15:29.506 }' 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.506 09:34:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.784 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.784 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.784 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.784 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.784 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.785 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:29.785 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:15:29.785 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.785 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.785 [2024-11-15 09:34:18.235139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.057 BaseBdev1 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.057 [ 00:15:30.057 { 00:15:30.057 "name": "BaseBdev1", 00:15:30.057 "aliases": [ 00:15:30.057 "736482d9-6195-4653-84bf-13d4088f9603" 00:15:30.057 ], 00:15:30.057 "product_name": "Malloc disk", 00:15:30.057 "block_size": 512, 00:15:30.057 "num_blocks": 65536, 00:15:30.057 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 00:15:30.057 "assigned_rate_limits": { 00:15:30.057 "rw_ios_per_sec": 0, 00:15:30.057 "rw_mbytes_per_sec": 0, 00:15:30.057 "r_mbytes_per_sec": 0, 00:15:30.057 "w_mbytes_per_sec": 0 00:15:30.057 }, 00:15:30.057 "claimed": true, 00:15:30.057 "claim_type": "exclusive_write", 00:15:30.057 "zoned": false, 00:15:30.057 "supported_io_types": { 00:15:30.057 "read": true, 00:15:30.057 "write": true, 00:15:30.057 "unmap": true, 00:15:30.057 "flush": true, 00:15:30.057 "reset": true, 00:15:30.057 "nvme_admin": false, 00:15:30.057 "nvme_io": false, 00:15:30.057 "nvme_io_md": false, 00:15:30.057 "write_zeroes": true, 00:15:30.057 "zcopy": true, 00:15:30.057 "get_zone_info": false, 00:15:30.057 "zone_management": false, 00:15:30.057 "zone_append": false, 00:15:30.057 "compare": false, 00:15:30.057 "compare_and_write": false, 00:15:30.057 "abort": true, 00:15:30.057 "seek_hole": false, 00:15:30.057 "seek_data": false, 00:15:30.057 "copy": true, 00:15:30.057 "nvme_iov_md": false 00:15:30.057 }, 00:15:30.057 "memory_domains": [ 00:15:30.057 { 00:15:30.057 "dma_device_id": "system", 00:15:30.057 "dma_device_type": 1 00:15:30.057 }, 00:15:30.057 { 00:15:30.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.057 "dma_device_type": 2 00:15:30.057 } 00:15:30.057 ], 00:15:30.057 "driver_specific": {} 00:15:30.057 } 00:15:30.057 ] 00:15:30.057 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.058 "name": "Existed_Raid", 00:15:30.058 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:30.058 "strip_size_kb": 64, 00:15:30.058 
"state": "configuring", 00:15:30.058 "raid_level": "raid5f", 00:15:30.058 "superblock": true, 00:15:30.058 "num_base_bdevs": 3, 00:15:30.058 "num_base_bdevs_discovered": 2, 00:15:30.058 "num_base_bdevs_operational": 3, 00:15:30.058 "base_bdevs_list": [ 00:15:30.058 { 00:15:30.058 "name": "BaseBdev1", 00:15:30.058 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 00:15:30.058 "is_configured": true, 00:15:30.058 "data_offset": 2048, 00:15:30.058 "data_size": 63488 00:15:30.058 }, 00:15:30.058 { 00:15:30.058 "name": null, 00:15:30.058 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:30.058 "is_configured": false, 00:15:30.058 "data_offset": 0, 00:15:30.058 "data_size": 63488 00:15:30.058 }, 00:15:30.058 { 00:15:30.058 "name": "BaseBdev3", 00:15:30.058 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:30.058 "is_configured": true, 00:15:30.058 "data_offset": 2048, 00:15:30.058 "data_size": 63488 00:15:30.058 } 00:15:30.058 ] 00:15:30.058 }' 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.058 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.317 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.317 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.317 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.317 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.317 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.576 [2024-11-15 09:34:18.790290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.576 09:34:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.576 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.576 "name": "Existed_Raid", 00:15:30.576 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:30.576 "strip_size_kb": 64, 00:15:30.576 "state": "configuring", 00:15:30.576 "raid_level": "raid5f", 00:15:30.576 "superblock": true, 00:15:30.576 "num_base_bdevs": 3, 00:15:30.576 "num_base_bdevs_discovered": 1, 00:15:30.576 "num_base_bdevs_operational": 3, 00:15:30.576 "base_bdevs_list": [ 00:15:30.576 { 00:15:30.576 "name": "BaseBdev1", 00:15:30.576 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 00:15:30.577 "is_configured": true, 00:15:30.577 "data_offset": 2048, 00:15:30.577 "data_size": 63488 00:15:30.577 }, 00:15:30.577 { 00:15:30.577 "name": null, 00:15:30.577 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:30.577 "is_configured": false, 00:15:30.577 "data_offset": 0, 00:15:30.577 "data_size": 63488 00:15:30.577 }, 00:15:30.577 { 00:15:30.577 "name": null, 00:15:30.577 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:30.577 "is_configured": false, 00:15:30.577 "data_offset": 0, 00:15:30.577 "data_size": 63488 00:15:30.577 } 00:15:30.577 ] 00:15:30.577 }' 00:15:30.577 09:34:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.577 09:34:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.835 [2024-11-15 09:34:19.265540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.835 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.093 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.093 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.093 "name": "Existed_Raid", 00:15:31.093 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:31.093 "strip_size_kb": 64, 00:15:31.093 "state": "configuring", 00:15:31.093 "raid_level": "raid5f", 00:15:31.093 "superblock": true, 00:15:31.093 "num_base_bdevs": 3, 00:15:31.093 "num_base_bdevs_discovered": 2, 00:15:31.093 "num_base_bdevs_operational": 3, 00:15:31.093 "base_bdevs_list": [ 00:15:31.093 { 00:15:31.094 "name": "BaseBdev1", 00:15:31.094 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 00:15:31.094 "is_configured": true, 00:15:31.094 "data_offset": 2048, 00:15:31.094 "data_size": 63488 00:15:31.094 }, 00:15:31.094 { 00:15:31.094 "name": null, 00:15:31.094 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:31.094 "is_configured": false, 00:15:31.094 "data_offset": 0, 00:15:31.094 "data_size": 63488 00:15:31.094 }, 00:15:31.094 { 00:15:31.094 "name": "BaseBdev3", 00:15:31.094 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:31.094 "is_configured": true, 00:15:31.094 "data_offset": 
2048, 00:15:31.094 "data_size": 63488 00:15:31.094 } 00:15:31.094 ] 00:15:31.094 }' 00:15:31.094 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.094 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.352 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.352 [2024-11-15 09:34:19.772725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.611 09:34:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.611 "name": "Existed_Raid", 00:15:31.611 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:31.611 "strip_size_kb": 64, 00:15:31.611 "state": "configuring", 00:15:31.611 "raid_level": "raid5f", 00:15:31.611 "superblock": true, 00:15:31.611 "num_base_bdevs": 3, 00:15:31.611 "num_base_bdevs_discovered": 1, 00:15:31.611 "num_base_bdevs_operational": 3, 00:15:31.611 "base_bdevs_list": [ 00:15:31.611 { 00:15:31.611 "name": null, 00:15:31.611 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 
00:15:31.611 "is_configured": false, 00:15:31.611 "data_offset": 0, 00:15:31.611 "data_size": 63488 00:15:31.611 }, 00:15:31.611 { 00:15:31.611 "name": null, 00:15:31.611 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:31.611 "is_configured": false, 00:15:31.611 "data_offset": 0, 00:15:31.611 "data_size": 63488 00:15:31.611 }, 00:15:31.611 { 00:15:31.611 "name": "BaseBdev3", 00:15:31.611 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:31.611 "is_configured": true, 00:15:31.611 "data_offset": 2048, 00:15:31.611 "data_size": 63488 00:15:31.611 } 00:15:31.611 ] 00:15:31.611 }' 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.611 09:34:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.177 [2024-11-15 09:34:20.418009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.177 "name": "Existed_Raid", 00:15:32.177 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:32.177 "strip_size_kb": 64, 00:15:32.177 "state": "configuring", 00:15:32.177 "raid_level": "raid5f", 00:15:32.177 "superblock": true, 00:15:32.177 "num_base_bdevs": 3, 00:15:32.177 "num_base_bdevs_discovered": 2, 00:15:32.177 "num_base_bdevs_operational": 3, 00:15:32.177 "base_bdevs_list": [ 00:15:32.177 { 00:15:32.177 "name": null, 00:15:32.177 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 00:15:32.177 "is_configured": false, 00:15:32.177 "data_offset": 0, 00:15:32.177 "data_size": 63488 00:15:32.177 }, 00:15:32.177 { 00:15:32.177 "name": "BaseBdev2", 00:15:32.177 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:32.177 "is_configured": true, 00:15:32.177 "data_offset": 2048, 00:15:32.177 "data_size": 63488 00:15:32.177 }, 00:15:32.177 { 00:15:32.177 "name": "BaseBdev3", 00:15:32.177 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:32.177 "is_configured": true, 00:15:32.177 "data_offset": 2048, 00:15:32.177 "data_size": 63488 00:15:32.177 } 00:15:32.177 ] 00:15:32.177 }' 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.177 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.435 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.435 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.435 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.435 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.694 09:34:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 736482d9-6195-4653-84bf-13d4088f9603 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.694 09:34:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.694 [2024-11-15 09:34:21.023469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:32.694 [2024-11-15 09:34:21.023756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:32.694 [2024-11-15 09:34:21.023774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:32.694 [2024-11-15 09:34:21.024145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:32.694 NewBaseBdev 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:32.694 09:34:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.694 [2024-11-15 09:34:21.030887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.694 [2024-11-15 09:34:21.031008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:32.694 [2024-11-15 09:34:21.031255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.694 [ 00:15:32.694 { 00:15:32.694 "name": "NewBaseBdev", 00:15:32.694 "aliases": [ 00:15:32.694 "736482d9-6195-4653-84bf-13d4088f9603" 00:15:32.694 ], 00:15:32.694 "product_name": "Malloc disk", 00:15:32.694 "block_size": 512, 00:15:32.694 "num_blocks": 65536, 00:15:32.694 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 
00:15:32.694 "assigned_rate_limits": { 00:15:32.694 "rw_ios_per_sec": 0, 00:15:32.694 "rw_mbytes_per_sec": 0, 00:15:32.694 "r_mbytes_per_sec": 0, 00:15:32.694 "w_mbytes_per_sec": 0 00:15:32.694 }, 00:15:32.694 "claimed": true, 00:15:32.694 "claim_type": "exclusive_write", 00:15:32.694 "zoned": false, 00:15:32.694 "supported_io_types": { 00:15:32.694 "read": true, 00:15:32.694 "write": true, 00:15:32.694 "unmap": true, 00:15:32.694 "flush": true, 00:15:32.694 "reset": true, 00:15:32.694 "nvme_admin": false, 00:15:32.694 "nvme_io": false, 00:15:32.694 "nvme_io_md": false, 00:15:32.694 "write_zeroes": true, 00:15:32.694 "zcopy": true, 00:15:32.694 "get_zone_info": false, 00:15:32.694 "zone_management": false, 00:15:32.694 "zone_append": false, 00:15:32.694 "compare": false, 00:15:32.694 "compare_and_write": false, 00:15:32.694 "abort": true, 00:15:32.694 "seek_hole": false, 00:15:32.694 "seek_data": false, 00:15:32.694 "copy": true, 00:15:32.694 "nvme_iov_md": false 00:15:32.694 }, 00:15:32.694 "memory_domains": [ 00:15:32.694 { 00:15:32.694 "dma_device_id": "system", 00:15:32.694 "dma_device_type": 1 00:15:32.694 }, 00:15:32.694 { 00:15:32.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.694 "dma_device_type": 2 00:15:32.694 } 00:15:32.694 ], 00:15:32.694 "driver_specific": {} 00:15:32.694 } 00:15:32.694 ] 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.694 09:34:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.694 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.694 "name": "Existed_Raid", 00:15:32.694 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:32.694 "strip_size_kb": 64, 00:15:32.694 "state": "online", 00:15:32.694 "raid_level": "raid5f", 00:15:32.694 "superblock": true, 00:15:32.694 "num_base_bdevs": 3, 00:15:32.694 "num_base_bdevs_discovered": 3, 00:15:32.694 "num_base_bdevs_operational": 3, 00:15:32.694 "base_bdevs_list": [ 00:15:32.694 { 00:15:32.694 "name": "NewBaseBdev", 00:15:32.695 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 
00:15:32.695 "is_configured": true, 00:15:32.695 "data_offset": 2048, 00:15:32.695 "data_size": 63488 00:15:32.695 }, 00:15:32.695 { 00:15:32.695 "name": "BaseBdev2", 00:15:32.695 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:32.695 "is_configured": true, 00:15:32.695 "data_offset": 2048, 00:15:32.695 "data_size": 63488 00:15:32.695 }, 00:15:32.695 { 00:15:32.695 "name": "BaseBdev3", 00:15:32.695 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:32.695 "is_configured": true, 00:15:32.695 "data_offset": 2048, 00:15:32.695 "data_size": 63488 00:15:32.695 } 00:15:32.695 ] 00:15:32.695 }' 00:15:32.695 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.695 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.260 
[2024-11-15 09:34:21.530003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.260 "name": "Existed_Raid", 00:15:33.260 "aliases": [ 00:15:33.260 "fd85b243-3db7-4d04-a8d5-3ef00b417ce3" 00:15:33.260 ], 00:15:33.260 "product_name": "Raid Volume", 00:15:33.260 "block_size": 512, 00:15:33.260 "num_blocks": 126976, 00:15:33.260 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:33.260 "assigned_rate_limits": { 00:15:33.260 "rw_ios_per_sec": 0, 00:15:33.260 "rw_mbytes_per_sec": 0, 00:15:33.260 "r_mbytes_per_sec": 0, 00:15:33.260 "w_mbytes_per_sec": 0 00:15:33.260 }, 00:15:33.260 "claimed": false, 00:15:33.260 "zoned": false, 00:15:33.260 "supported_io_types": { 00:15:33.260 "read": true, 00:15:33.260 "write": true, 00:15:33.260 "unmap": false, 00:15:33.260 "flush": false, 00:15:33.260 "reset": true, 00:15:33.260 "nvme_admin": false, 00:15:33.260 "nvme_io": false, 00:15:33.260 "nvme_io_md": false, 00:15:33.260 "write_zeroes": true, 00:15:33.260 "zcopy": false, 00:15:33.260 "get_zone_info": false, 00:15:33.260 "zone_management": false, 00:15:33.260 "zone_append": false, 00:15:33.260 "compare": false, 00:15:33.260 "compare_and_write": false, 00:15:33.260 "abort": false, 00:15:33.260 "seek_hole": false, 00:15:33.260 "seek_data": false, 00:15:33.260 "copy": false, 00:15:33.260 "nvme_iov_md": false 00:15:33.260 }, 00:15:33.260 "driver_specific": { 00:15:33.260 "raid": { 00:15:33.260 "uuid": "fd85b243-3db7-4d04-a8d5-3ef00b417ce3", 00:15:33.260 "strip_size_kb": 64, 00:15:33.260 "state": "online", 00:15:33.260 "raid_level": "raid5f", 00:15:33.260 "superblock": true, 00:15:33.260 "num_base_bdevs": 3, 00:15:33.260 "num_base_bdevs_discovered": 3, 00:15:33.260 "num_base_bdevs_operational": 3, 00:15:33.260 "base_bdevs_list": 
[ 00:15:33.260 { 00:15:33.260 "name": "NewBaseBdev", 00:15:33.260 "uuid": "736482d9-6195-4653-84bf-13d4088f9603", 00:15:33.260 "is_configured": true, 00:15:33.260 "data_offset": 2048, 00:15:33.260 "data_size": 63488 00:15:33.260 }, 00:15:33.260 { 00:15:33.260 "name": "BaseBdev2", 00:15:33.260 "uuid": "6d0f22c0-34ad-4b38-9ee3-8cf96b040bb0", 00:15:33.260 "is_configured": true, 00:15:33.260 "data_offset": 2048, 00:15:33.260 "data_size": 63488 00:15:33.260 }, 00:15:33.260 { 00:15:33.260 "name": "BaseBdev3", 00:15:33.260 "uuid": "1980dd81-9d8e-41c6-9484-a1753b352dcf", 00:15:33.260 "is_configured": true, 00:15:33.260 "data_offset": 2048, 00:15:33.260 "data_size": 63488 00:15:33.260 } 00:15:33.260 ] 00:15:33.260 } 00:15:33.260 } 00:15:33.260 }' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:33.260 BaseBdev2 00:15:33.260 BaseBdev3' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.260 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.261 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:33.261 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.261 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.518 09:34:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.518 [2024-11-15 09:34:21.769367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.518 [2024-11-15 09:34:21.769495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.518 [2024-11-15 09:34:21.769599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.518 [2024-11-15 09:34:21.769958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.518 [2024-11-15 09:34:21.769976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80928 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80928 ']' 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80928 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80928 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80928' 00:15:33.518 killing process with pid 80928 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80928 00:15:33.518 [2024-11-15 09:34:21.823942] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.518 09:34:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80928 00:15:33.776 [2024-11-15 09:34:22.188662] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.150 09:34:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:35.150 00:15:35.150 real 0m11.349s 00:15:35.150 user 0m17.842s 00:15:35.150 sys 0m2.103s 00:15:35.150 ************************************ 00:15:35.150 END TEST raid5f_state_function_test_sb 00:15:35.150 ************************************ 00:15:35.150 09:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:35.150 09:34:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.150 09:34:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:35.150 09:34:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:35.150 09:34:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:35.150 09:34:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:15:35.150 ************************************ 00:15:35.150 START TEST raid5f_superblock_test 00:15:35.150 ************************************ 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81553 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81553 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81553 ']' 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:35.150 09:34:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.408 [2024-11-15 09:34:23.653636] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:15:35.408 [2024-11-15 09:34:23.653876] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81553 ] 00:15:35.408 [2024-11-15 09:34:23.818678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.667 [2024-11-15 09:34:23.950625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.927 [2024-11-15 09:34:24.182547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.927 [2024-11-15 09:34:24.182719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.186 malloc1 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.186 [2024-11-15 09:34:24.625671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:36.186 [2024-11-15 09:34:24.625749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.186 [2024-11-15 09:34:24.625778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:36.186 [2024-11-15 09:34:24.625791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.186 [2024-11-15 09:34:24.628267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.186 [2024-11-15 09:34:24.628311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:36.186 pt1 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.186 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.445 malloc2 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.445 [2024-11-15 09:34:24.686724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.445 [2024-11-15 09:34:24.686899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.445 [2024-11-15 09:34:24.686948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:36.445 [2024-11-15 09:34:24.686986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.445 [2024-11-15 09:34:24.689295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.445 [2024-11-15 09:34:24.689370] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.445 pt2 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.445 malloc3 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.445 [2024-11-15 09:34:24.760028] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.445 [2024-11-15 09:34:24.760097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.445 [2024-11-15 09:34:24.760122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:36.445 [2024-11-15 09:34:24.760134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.445 [2024-11-15 09:34:24.762356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.445 [2024-11-15 09:34:24.762395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.445 pt3 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.445 [2024-11-15 09:34:24.772085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:36.445 [2024-11-15 09:34:24.774112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.445 [2024-11-15 09:34:24.774184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.445 [2024-11-15 09:34:24.774375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:36.445 [2024-11-15 09:34:24.774401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:36.445 [2024-11-15 09:34:24.774673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:36.445 [2024-11-15 09:34:24.781190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:36.445 [2024-11-15 09:34:24.781213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:36.445 [2024-11-15 09:34:24.781439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.445 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.446 
09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.446 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.446 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.446 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.446 "name": "raid_bdev1", 00:15:36.446 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:36.446 "strip_size_kb": 64, 00:15:36.446 "state": "online", 00:15:36.446 "raid_level": "raid5f", 00:15:36.446 "superblock": true, 00:15:36.446 "num_base_bdevs": 3, 00:15:36.446 "num_base_bdevs_discovered": 3, 00:15:36.446 "num_base_bdevs_operational": 3, 00:15:36.446 "base_bdevs_list": [ 00:15:36.446 { 00:15:36.446 "name": "pt1", 00:15:36.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.446 "is_configured": true, 00:15:36.446 "data_offset": 2048, 00:15:36.446 "data_size": 63488 00:15:36.446 }, 00:15:36.446 { 00:15:36.446 "name": "pt2", 00:15:36.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.446 "is_configured": true, 00:15:36.446 "data_offset": 2048, 00:15:36.446 "data_size": 63488 00:15:36.446 }, 00:15:36.446 { 00:15:36.446 "name": "pt3", 00:15:36.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.446 "is_configured": true, 00:15:36.446 "data_offset": 2048, 00:15:36.446 "data_size": 63488 00:15:36.446 } 00:15:36.446 ] 00:15:36.446 }' 00:15:36.446 09:34:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.446 09:34:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:37.015 09:34:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.015 [2024-11-15 09:34:25.228280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.015 "name": "raid_bdev1", 00:15:37.015 "aliases": [ 00:15:37.015 "1eb0657a-cb87-4bc8-9993-b4aca4e3254f" 00:15:37.015 ], 00:15:37.015 "product_name": "Raid Volume", 00:15:37.015 "block_size": 512, 00:15:37.015 "num_blocks": 126976, 00:15:37.015 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:37.015 "assigned_rate_limits": { 00:15:37.015 "rw_ios_per_sec": 0, 00:15:37.015 "rw_mbytes_per_sec": 0, 00:15:37.015 "r_mbytes_per_sec": 0, 00:15:37.015 "w_mbytes_per_sec": 0 00:15:37.015 }, 00:15:37.015 "claimed": false, 00:15:37.015 "zoned": false, 00:15:37.015 "supported_io_types": { 00:15:37.015 "read": true, 00:15:37.015 "write": true, 00:15:37.015 "unmap": false, 00:15:37.015 "flush": false, 00:15:37.015 "reset": true, 00:15:37.015 "nvme_admin": false, 00:15:37.015 "nvme_io": false, 00:15:37.015 "nvme_io_md": false, 
00:15:37.015 "write_zeroes": true, 00:15:37.015 "zcopy": false, 00:15:37.015 "get_zone_info": false, 00:15:37.015 "zone_management": false, 00:15:37.015 "zone_append": false, 00:15:37.015 "compare": false, 00:15:37.015 "compare_and_write": false, 00:15:37.015 "abort": false, 00:15:37.015 "seek_hole": false, 00:15:37.015 "seek_data": false, 00:15:37.015 "copy": false, 00:15:37.015 "nvme_iov_md": false 00:15:37.015 }, 00:15:37.015 "driver_specific": { 00:15:37.015 "raid": { 00:15:37.015 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:37.015 "strip_size_kb": 64, 00:15:37.015 "state": "online", 00:15:37.015 "raid_level": "raid5f", 00:15:37.015 "superblock": true, 00:15:37.015 "num_base_bdevs": 3, 00:15:37.015 "num_base_bdevs_discovered": 3, 00:15:37.015 "num_base_bdevs_operational": 3, 00:15:37.015 "base_bdevs_list": [ 00:15:37.015 { 00:15:37.015 "name": "pt1", 00:15:37.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.015 "is_configured": true, 00:15:37.015 "data_offset": 2048, 00:15:37.015 "data_size": 63488 00:15:37.015 }, 00:15:37.015 { 00:15:37.015 "name": "pt2", 00:15:37.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.015 "is_configured": true, 00:15:37.015 "data_offset": 2048, 00:15:37.015 "data_size": 63488 00:15:37.015 }, 00:15:37.015 { 00:15:37.015 "name": "pt3", 00:15:37.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.015 "is_configured": true, 00:15:37.015 "data_offset": 2048, 00:15:37.015 "data_size": 63488 00:15:37.015 } 00:15:37.015 ] 00:15:37.015 } 00:15:37.015 } 00:15:37.015 }' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:37.015 pt2 00:15:37.015 pt3' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.015 
09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.015 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:37.016 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.276 [2024-11-15 09:34:25.487749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1eb0657a-cb87-4bc8-9993-b4aca4e3254f 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1eb0657a-cb87-4bc8-9993-b4aca4e3254f ']' 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.276 09:34:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.276 [2024-11-15 09:34:25.515480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.276 [2024-11-15 09:34:25.515517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.276 [2024-11-15 09:34:25.515604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.276 [2024-11-15 09:34:25.515689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.276 [2024-11-15 09:34:25.515708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.276 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.277 [2024-11-15 09:34:25.659320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:37.277 [2024-11-15 09:34:25.661459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:37.277 [2024-11-15 09:34:25.661533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:37.277 [2024-11-15 09:34:25.661591] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:37.277 [2024-11-15 09:34:25.661654] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:37.277 [2024-11-15 09:34:25.661676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:37.277 [2024-11-15 09:34:25.661694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.277 [2024-11-15 09:34:25.661705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:37.277 request: 00:15:37.277 { 00:15:37.277 "name": "raid_bdev1", 00:15:37.277 "raid_level": "raid5f", 00:15:37.277 "base_bdevs": [ 00:15:37.277 "malloc1", 00:15:37.277 "malloc2", 00:15:37.277 "malloc3" 00:15:37.277 ], 00:15:37.277 "strip_size_kb": 64, 00:15:37.277 "superblock": false, 00:15:37.277 "method": "bdev_raid_create", 00:15:37.277 "req_id": 1 00:15:37.277 } 00:15:37.277 Got JSON-RPC error response 00:15:37.277 response: 00:15:37.277 { 00:15:37.277 "code": -17, 00:15:37.277 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:37.277 } 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.277 [2024-11-15 09:34:25.723148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.277 [2024-11-15 09:34:25.723225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.277 [2024-11-15 09:34:25.723248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:37.277 [2024-11-15 09:34:25.723259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.277 [2024-11-15 09:34:25.725754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.277 [2024-11-15 09:34:25.725799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.277 [2024-11-15 09:34:25.725904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:37.277 [2024-11-15 09:34:25.725961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.277 pt1 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.277 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.537 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.537 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.537 "name": "raid_bdev1", 00:15:37.537 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:37.537 "strip_size_kb": 64, 00:15:37.537 "state": "configuring", 00:15:37.537 "raid_level": "raid5f", 00:15:37.537 "superblock": true, 00:15:37.537 "num_base_bdevs": 3, 00:15:37.537 "num_base_bdevs_discovered": 1, 00:15:37.537 
"num_base_bdevs_operational": 3, 00:15:37.537 "base_bdevs_list": [ 00:15:37.537 { 00:15:37.537 "name": "pt1", 00:15:37.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.537 "is_configured": true, 00:15:37.537 "data_offset": 2048, 00:15:37.537 "data_size": 63488 00:15:37.537 }, 00:15:37.537 { 00:15:37.537 "name": null, 00:15:37.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.537 "is_configured": false, 00:15:37.537 "data_offset": 2048, 00:15:37.537 "data_size": 63488 00:15:37.537 }, 00:15:37.537 { 00:15:37.537 "name": null, 00:15:37.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.537 "is_configured": false, 00:15:37.537 "data_offset": 2048, 00:15:37.537 "data_size": 63488 00:15:37.537 } 00:15:37.537 ] 00:15:37.537 }' 00:15:37.537 09:34:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.537 09:34:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.794 [2024-11-15 09:34:26.146490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.794 [2024-11-15 09:34:26.146580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.794 [2024-11-15 09:34:26.146606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:37.794 [2024-11-15 09:34:26.146617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.794 [2024-11-15 09:34:26.147124] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.794 [2024-11-15 09:34:26.147167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.794 [2024-11-15 09:34:26.147266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:37.794 [2024-11-15 09:34:26.147297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.794 pt2 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.794 [2024-11-15 09:34:26.158464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.794 "name": "raid_bdev1", 00:15:37.794 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:37.794 "strip_size_kb": 64, 00:15:37.794 "state": "configuring", 00:15:37.794 "raid_level": "raid5f", 00:15:37.794 "superblock": true, 00:15:37.794 "num_base_bdevs": 3, 00:15:37.794 "num_base_bdevs_discovered": 1, 00:15:37.794 "num_base_bdevs_operational": 3, 00:15:37.794 "base_bdevs_list": [ 00:15:37.794 { 00:15:37.794 "name": "pt1", 00:15:37.794 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.794 "is_configured": true, 00:15:37.794 "data_offset": 2048, 00:15:37.794 "data_size": 63488 00:15:37.794 }, 00:15:37.794 { 00:15:37.794 "name": null, 00:15:37.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.794 "is_configured": false, 00:15:37.794 "data_offset": 0, 00:15:37.794 "data_size": 63488 00:15:37.794 }, 00:15:37.794 { 00:15:37.794 "name": null, 00:15:37.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.794 "is_configured": false, 00:15:37.794 "data_offset": 2048, 00:15:37.794 "data_size": 63488 00:15:37.794 } 00:15:37.794 ] 00:15:37.794 }' 00:15:37.794 09:34:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.794 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.383 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:38.383 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.383 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.383 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.383 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.383 [2024-11-15 09:34:26.565765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.383 [2024-11-15 09:34:26.565863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.383 [2024-11-15 09:34:26.565886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:38.383 [2024-11-15 09:34:26.565900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.384 [2024-11-15 09:34:26.566439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.384 [2024-11-15 09:34:26.566475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.384 [2024-11-15 09:34:26.566565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:38.384 [2024-11-15 09:34:26.566598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.384 pt2 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.384 09:34:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.384 [2024-11-15 09:34:26.573726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.384 [2024-11-15 09:34:26.573781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.384 [2024-11-15 09:34:26.573799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:38.384 [2024-11-15 09:34:26.573811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.384 [2024-11-15 09:34:26.574219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.384 [2024-11-15 09:34:26.574256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.384 [2024-11-15 09:34:26.574324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:38.384 [2024-11-15 09:34:26.574351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.384 [2024-11-15 09:34:26.574478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:38.384 [2024-11-15 09:34:26.574494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:38.384 [2024-11-15 09:34:26.574737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:38.384 [2024-11-15 09:34:26.581053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:38.384 [2024-11-15 09:34:26.581080] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:38.384 [2024-11-15 09:34:26.581303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.384 pt3 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.384 "name": "raid_bdev1", 00:15:38.384 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:38.384 "strip_size_kb": 64, 00:15:38.384 "state": "online", 00:15:38.384 "raid_level": "raid5f", 00:15:38.384 "superblock": true, 00:15:38.384 "num_base_bdevs": 3, 00:15:38.384 "num_base_bdevs_discovered": 3, 00:15:38.384 "num_base_bdevs_operational": 3, 00:15:38.384 "base_bdevs_list": [ 00:15:38.384 { 00:15:38.384 "name": "pt1", 00:15:38.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.384 "is_configured": true, 00:15:38.384 "data_offset": 2048, 00:15:38.384 "data_size": 63488 00:15:38.384 }, 00:15:38.384 { 00:15:38.384 "name": "pt2", 00:15:38.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.384 "is_configured": true, 00:15:38.384 "data_offset": 2048, 00:15:38.384 "data_size": 63488 00:15:38.384 }, 00:15:38.384 { 00:15:38.384 "name": "pt3", 00:15:38.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.384 "is_configured": true, 00:15:38.384 "data_offset": 2048, 00:15:38.384 "data_size": 63488 00:15:38.384 } 00:15:38.384 ] 00:15:38.384 }' 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.384 09:34:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.643 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:38.643 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.644 [2024-11-15 09:34:27.028089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:38.644 "name": "raid_bdev1", 00:15:38.644 "aliases": [ 00:15:38.644 "1eb0657a-cb87-4bc8-9993-b4aca4e3254f" 00:15:38.644 ], 00:15:38.644 "product_name": "Raid Volume", 00:15:38.644 "block_size": 512, 00:15:38.644 "num_blocks": 126976, 00:15:38.644 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:38.644 "assigned_rate_limits": { 00:15:38.644 "rw_ios_per_sec": 0, 00:15:38.644 "rw_mbytes_per_sec": 0, 00:15:38.644 "r_mbytes_per_sec": 0, 00:15:38.644 "w_mbytes_per_sec": 0 00:15:38.644 }, 00:15:38.644 "claimed": false, 00:15:38.644 "zoned": false, 00:15:38.644 "supported_io_types": { 00:15:38.644 "read": true, 00:15:38.644 "write": true, 00:15:38.644 "unmap": false, 00:15:38.644 "flush": false, 00:15:38.644 "reset": true, 00:15:38.644 "nvme_admin": false, 00:15:38.644 "nvme_io": false, 00:15:38.644 "nvme_io_md": false, 00:15:38.644 "write_zeroes": true, 00:15:38.644 "zcopy": false, 00:15:38.644 
"get_zone_info": false, 00:15:38.644 "zone_management": false, 00:15:38.644 "zone_append": false, 00:15:38.644 "compare": false, 00:15:38.644 "compare_and_write": false, 00:15:38.644 "abort": false, 00:15:38.644 "seek_hole": false, 00:15:38.644 "seek_data": false, 00:15:38.644 "copy": false, 00:15:38.644 "nvme_iov_md": false 00:15:38.644 }, 00:15:38.644 "driver_specific": { 00:15:38.644 "raid": { 00:15:38.644 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:38.644 "strip_size_kb": 64, 00:15:38.644 "state": "online", 00:15:38.644 "raid_level": "raid5f", 00:15:38.644 "superblock": true, 00:15:38.644 "num_base_bdevs": 3, 00:15:38.644 "num_base_bdevs_discovered": 3, 00:15:38.644 "num_base_bdevs_operational": 3, 00:15:38.644 "base_bdevs_list": [ 00:15:38.644 { 00:15:38.644 "name": "pt1", 00:15:38.644 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.644 "is_configured": true, 00:15:38.644 "data_offset": 2048, 00:15:38.644 "data_size": 63488 00:15:38.644 }, 00:15:38.644 { 00:15:38.644 "name": "pt2", 00:15:38.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.644 "is_configured": true, 00:15:38.644 "data_offset": 2048, 00:15:38.644 "data_size": 63488 00:15:38.644 }, 00:15:38.644 { 00:15:38.644 "name": "pt3", 00:15:38.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.644 "is_configured": true, 00:15:38.644 "data_offset": 2048, 00:15:38.644 "data_size": 63488 00:15:38.644 } 00:15:38.644 ] 00:15:38.644 } 00:15:38.644 } 00:15:38.644 }' 00:15:38.644 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:38.903 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:38.903 pt2 00:15:38.903 pt3' 00:15:38.903 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.903 09:34:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:38.903 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.903 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:38.903 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.903 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.903 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:38.904 [2024-11-15 09:34:27.331515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.904 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1eb0657a-cb87-4bc8-9993-b4aca4e3254f '!=' 1eb0657a-cb87-4bc8-9993-b4aca4e3254f ']' 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.164 [2024-11-15 09:34:27.375294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.164 
09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.164 "name": "raid_bdev1", 00:15:39.164 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:39.164 "strip_size_kb": 64, 00:15:39.164 "state": "online", 00:15:39.164 "raid_level": "raid5f", 00:15:39.164 "superblock": true, 00:15:39.164 "num_base_bdevs": 3, 00:15:39.164 "num_base_bdevs_discovered": 2, 00:15:39.164 "num_base_bdevs_operational": 2, 00:15:39.164 "base_bdevs_list": [ 00:15:39.164 { 00:15:39.164 "name": null, 00:15:39.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.164 "is_configured": false, 00:15:39.164 "data_offset": 0, 00:15:39.164 "data_size": 63488 00:15:39.164 }, 00:15:39.164 { 00:15:39.164 "name": "pt2", 00:15:39.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.164 "is_configured": true, 00:15:39.164 "data_offset": 2048, 00:15:39.164 "data_size": 63488 00:15:39.164 }, 00:15:39.164 { 00:15:39.164 "name": "pt3", 00:15:39.164 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.164 "is_configured": true, 00:15:39.164 "data_offset": 2048, 00:15:39.164 "data_size": 63488 00:15:39.164 } 00:15:39.164 ] 00:15:39.164 }' 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.164 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.425 [2024-11-15 09:34:27.806545] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.425 [2024-11-15 09:34:27.806583] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.425 [2024-11-15 09:34:27.806692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.425 [2024-11-15 09:34:27.806770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.425 [2024-11-15 09:34:27.806793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.425 [2024-11-15 09:34:27.874381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:39.425 [2024-11-15 09:34:27.874451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.425 [2024-11-15 09:34:27.874471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:39.425 [2024-11-15 09:34:27.874485] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:39.425 [2024-11-15 09:34:27.876968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.425 [2024-11-15 09:34:27.877006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:39.425 [2024-11-15 09:34:27.877087] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:39.425 [2024-11-15 09:34:27.877159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:39.425 pt2 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.425 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.685 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.685 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.685 "name": "raid_bdev1", 00:15:39.685 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:39.685 "strip_size_kb": 64, 00:15:39.685 "state": "configuring", 00:15:39.685 "raid_level": "raid5f", 00:15:39.685 "superblock": true, 00:15:39.685 "num_base_bdevs": 3, 00:15:39.685 "num_base_bdevs_discovered": 1, 00:15:39.685 "num_base_bdevs_operational": 2, 00:15:39.685 "base_bdevs_list": [ 00:15:39.685 { 00:15:39.685 "name": null, 00:15:39.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.685 "is_configured": false, 00:15:39.685 "data_offset": 2048, 00:15:39.685 "data_size": 63488 00:15:39.685 }, 00:15:39.685 { 00:15:39.685 "name": "pt2", 00:15:39.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.685 "is_configured": true, 00:15:39.685 "data_offset": 2048, 00:15:39.685 "data_size": 63488 00:15:39.685 }, 00:15:39.685 { 00:15:39.685 "name": null, 00:15:39.685 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.685 "is_configured": false, 00:15:39.685 "data_offset": 2048, 00:15:39.685 "data_size": 63488 00:15:39.685 } 00:15:39.685 ] 00:15:39.685 }' 00:15:39.685 09:34:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.685 09:34:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.945 [2024-11-15 09:34:28.329709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:39.945 [2024-11-15 09:34:28.329793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.945 [2024-11-15 09:34:28.329818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:39.945 [2024-11-15 09:34:28.329831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.945 [2024-11-15 09:34:28.330393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.945 [2024-11-15 09:34:28.330434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:39.945 [2024-11-15 09:34:28.330527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:39.945 [2024-11-15 09:34:28.330571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:39.945 [2024-11-15 09:34:28.330725] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:39.945 [2024-11-15 09:34:28.330746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:39.945 [2024-11-15 09:34:28.331029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:39.945 [2024-11-15 09:34:28.336953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:39.945 [2024-11-15 09:34:28.336997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:39.945 [2024-11-15 09:34:28.337337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.945 pt3 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.945 09:34:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.945 "name": "raid_bdev1", 00:15:39.945 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:39.945 "strip_size_kb": 64, 00:15:39.945 "state": "online", 00:15:39.945 "raid_level": "raid5f", 00:15:39.945 "superblock": true, 00:15:39.945 "num_base_bdevs": 3, 00:15:39.945 "num_base_bdevs_discovered": 2, 00:15:39.945 "num_base_bdevs_operational": 2, 00:15:39.945 "base_bdevs_list": [ 00:15:39.945 { 00:15:39.945 "name": null, 00:15:39.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.945 "is_configured": false, 00:15:39.945 "data_offset": 2048, 00:15:39.945 "data_size": 63488 00:15:39.945 }, 00:15:39.945 { 00:15:39.945 "name": "pt2", 00:15:39.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.945 "is_configured": true, 00:15:39.945 "data_offset": 2048, 00:15:39.945 "data_size": 63488 00:15:39.945 }, 00:15:39.945 { 00:15:39.945 "name": "pt3", 00:15:39.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.945 "is_configured": true, 00:15:39.945 "data_offset": 2048, 00:15:39.945 "data_size": 63488 00:15:39.945 } 00:15:39.945 ] 00:15:39.945 }' 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.945 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.512 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.512 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.512 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.512 [2024-11-15 09:34:28.753533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.512 [2024-11-15 09:34:28.753581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.513 [2024-11-15 09:34:28.753674] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.513 [2024-11-15 09:34:28.753748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.513 [2024-11-15 09:34:28.753765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 [2024-11-15 09:34:28.825438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.513 [2024-11-15 09:34:28.825517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.513 [2024-11-15 09:34:28.825538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:40.513 [2024-11-15 09:34:28.825549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.513 [2024-11-15 09:34:28.828045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.513 [2024-11-15 09:34:28.828089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.513 [2024-11-15 09:34:28.828186] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:40.513 [2024-11-15 09:34:28.828244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:40.513 [2024-11-15 09:34:28.828394] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:40.513 [2024-11-15 09:34:28.828413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.513 [2024-11-15 09:34:28.828435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:40.513 [2024-11-15 09:34:28.828513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.513 pt1 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:40.513 09:34:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.513 "name": "raid_bdev1", 00:15:40.513 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:40.513 "strip_size_kb": 64, 00:15:40.513 "state": "configuring", 00:15:40.513 "raid_level": "raid5f", 00:15:40.513 
"superblock": true, 00:15:40.513 "num_base_bdevs": 3, 00:15:40.513 "num_base_bdevs_discovered": 1, 00:15:40.513 "num_base_bdevs_operational": 2, 00:15:40.513 "base_bdevs_list": [ 00:15:40.513 { 00:15:40.513 "name": null, 00:15:40.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.513 "is_configured": false, 00:15:40.513 "data_offset": 2048, 00:15:40.513 "data_size": 63488 00:15:40.513 }, 00:15:40.513 { 00:15:40.513 "name": "pt2", 00:15:40.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.513 "is_configured": true, 00:15:40.513 "data_offset": 2048, 00:15:40.513 "data_size": 63488 00:15:40.513 }, 00:15:40.513 { 00:15:40.513 "name": null, 00:15:40.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.513 "is_configured": false, 00:15:40.513 "data_offset": 2048, 00:15:40.513 "data_size": 63488 00:15:40.513 } 00:15:40.513 ] 00:15:40.513 }' 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.513 09:34:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 [2024-11-15 09:34:29.380570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:41.081 [2024-11-15 09:34:29.380662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.081 [2024-11-15 09:34:29.380693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:41.081 [2024-11-15 09:34:29.380705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.081 [2024-11-15 09:34:29.381351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.081 [2024-11-15 09:34:29.381384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:41.081 [2024-11-15 09:34:29.381499] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:41.081 [2024-11-15 09:34:29.381534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:41.081 [2024-11-15 09:34:29.381707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:41.081 [2024-11-15 09:34:29.381721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:41.081 [2024-11-15 09:34:29.382040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:41.081 [2024-11-15 09:34:29.388713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:41.081 [2024-11-15 09:34:29.388750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:41.081 [2024-11-15 09:34:29.389090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.081 pt3 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.081 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.081 "name": "raid_bdev1", 00:15:41.081 "uuid": "1eb0657a-cb87-4bc8-9993-b4aca4e3254f", 00:15:41.081 "strip_size_kb": 64, 00:15:41.081 "state": "online", 00:15:41.081 "raid_level": 
"raid5f", 00:15:41.081 "superblock": true, 00:15:41.081 "num_base_bdevs": 3, 00:15:41.081 "num_base_bdevs_discovered": 2, 00:15:41.081 "num_base_bdevs_operational": 2, 00:15:41.081 "base_bdevs_list": [ 00:15:41.081 { 00:15:41.081 "name": null, 00:15:41.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.082 "is_configured": false, 00:15:41.082 "data_offset": 2048, 00:15:41.082 "data_size": 63488 00:15:41.082 }, 00:15:41.082 { 00:15:41.082 "name": "pt2", 00:15:41.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.082 "is_configured": true, 00:15:41.082 "data_offset": 2048, 00:15:41.082 "data_size": 63488 00:15:41.082 }, 00:15:41.082 { 00:15:41.082 "name": "pt3", 00:15:41.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.082 "is_configured": true, 00:15:41.082 "data_offset": 2048, 00:15:41.082 "data_size": 63488 00:15:41.082 } 00:15:41.082 ] 00:15:41.082 }' 00:15:41.082 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.082 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:41.692 [2024-11-15 09:34:29.916721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1eb0657a-cb87-4bc8-9993-b4aca4e3254f '!=' 1eb0657a-cb87-4bc8-9993-b4aca4e3254f ']' 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81553 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81553 ']' 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81553 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81553 00:15:41.692 killing process with pid 81553 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81553' 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81553 00:15:41.692 [2024-11-15 09:34:29.997281] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.692 [2024-11-15 09:34:29.997405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:41.692 09:34:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81553 00:15:41.692 [2024-11-15 09:34:29.997477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.692 [2024-11-15 09:34:29.997491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:41.961 [2024-11-15 09:34:30.336976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.337 09:34:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:43.337 00:15:43.337 real 0m8.033s 00:15:43.337 user 0m12.418s 00:15:43.337 sys 0m1.487s 00:15:43.337 09:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.337 09:34:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.337 ************************************ 00:15:43.337 END TEST raid5f_superblock_test 00:15:43.337 ************************************ 00:15:43.337 09:34:31 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:43.337 09:34:31 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:43.337 09:34:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:43.337 09:34:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:43.337 09:34:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.337 ************************************ 00:15:43.337 START TEST raid5f_rebuild_test 00:15:43.337 ************************************ 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=82006 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 82006 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 82006 ']' 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:43.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:43.337 09:34:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.337 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:43.337 Zero copy mechanism will not be used. 00:15:43.337 [2024-11-15 09:34:31.736820] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:15:43.337 [2024-11-15 09:34:31.736960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82006 ] 00:15:43.595 [2024-11-15 09:34:31.914387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.595 [2024-11-15 09:34:32.054124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.854 [2024-11-15 09:34:32.299332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.854 [2024-11-15 09:34:32.299417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.421 BaseBdev1_malloc 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.421 
09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.421 [2024-11-15 09:34:32.638700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:44.421 [2024-11-15 09:34:32.638776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.421 [2024-11-15 09:34:32.638802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:44.421 [2024-11-15 09:34:32.638815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.421 [2024-11-15 09:34:32.641393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.421 [2024-11-15 09:34:32.641432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:44.421 BaseBdev1 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.421 BaseBdev2_malloc 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.421 [2024-11-15 09:34:32.696544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:44.421 [2024-11-15 09:34:32.696629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.421 [2024-11-15 09:34:32.696653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:44.421 [2024-11-15 09:34:32.696667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.421 [2024-11-15 09:34:32.699161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.421 [2024-11-15 09:34:32.699194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:44.421 BaseBdev2 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.421 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.422 BaseBdev3_malloc 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.422 [2024-11-15 09:34:32.764570] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:44.422 [2024-11-15 09:34:32.764637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.422 [2024-11-15 09:34:32.764661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:44.422 [2024-11-15 09:34:32.764674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.422 [2024-11-15 09:34:32.767168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.422 [2024-11-15 09:34:32.767208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:44.422 BaseBdev3 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.422 spare_malloc 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.422 spare_delay 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.422 [2024-11-15 09:34:32.832231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.422 [2024-11-15 09:34:32.832290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.422 [2024-11-15 09:34:32.832308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:44.422 [2024-11-15 09:34:32.832319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.422 [2024-11-15 09:34:32.834667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.422 [2024-11-15 09:34:32.834703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.422 spare 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.422 [2024-11-15 09:34:32.840283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.422 [2024-11-15 09:34:32.842293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.422 [2024-11-15 09:34:32.842358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.422 [2024-11-15 09:34:32.842444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:44.422 [2024-11-15 09:34:32.842456] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:44.422 [2024-11-15 
09:34:32.842719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:44.422 [2024-11-15 09:34:32.848725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:44.422 [2024-11-15 09:34:32.848752] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:44.422 [2024-11-15 09:34:32.848968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.422 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.681 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.681 "name": "raid_bdev1", 00:15:44.681 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:44.681 "strip_size_kb": 64, 00:15:44.681 "state": "online", 00:15:44.681 "raid_level": "raid5f", 00:15:44.681 "superblock": false, 00:15:44.681 "num_base_bdevs": 3, 00:15:44.681 "num_base_bdevs_discovered": 3, 00:15:44.681 "num_base_bdevs_operational": 3, 00:15:44.681 "base_bdevs_list": [ 00:15:44.681 { 00:15:44.681 "name": "BaseBdev1", 00:15:44.681 "uuid": "cbb9d773-0de7-584d-b9b4-e32678988842", 00:15:44.681 "is_configured": true, 00:15:44.681 "data_offset": 0, 00:15:44.681 "data_size": 65536 00:15:44.681 }, 00:15:44.681 { 00:15:44.681 "name": "BaseBdev2", 00:15:44.681 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:44.681 "is_configured": true, 00:15:44.681 "data_offset": 0, 00:15:44.681 "data_size": 65536 00:15:44.681 }, 00:15:44.681 { 00:15:44.681 "name": "BaseBdev3", 00:15:44.681 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:44.681 "is_configured": true, 00:15:44.681 "data_offset": 0, 00:15:44.681 "data_size": 65536 00:15:44.681 } 00:15:44.681 ] 00:15:44.681 }' 00:15:44.681 09:34:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.681 09:34:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 
00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.939 [2024-11-15 09:34:33.243935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.939 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:45.199 [2024-11-15 09:34:33.543261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:45.199 /dev/nbd0 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:45.199 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.200 1+0 records in 00:15:45.200 1+0 records out 00:15:45.200 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323397 s, 
12.7 MB/s 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:45.200 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:45.781 512+0 records in 00:15:45.781 512+0 records out 00:15:45.781 67108864 bytes (67 MB, 64 MiB) copied, 0.343615 s, 195 MB/s 00:15:45.781 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:45.781 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.781 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:45.781 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.781 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:45.781 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:45.781 09:34:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.055 [2024-11-15 09:34:34.245300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.055 [2024-11-15 09:34:34.254557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.055 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.056 "name": "raid_bdev1", 00:15:46.056 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:46.056 "strip_size_kb": 64, 00:15:46.056 "state": "online", 00:15:46.056 "raid_level": "raid5f", 00:15:46.056 "superblock": false, 00:15:46.056 "num_base_bdevs": 3, 00:15:46.056 "num_base_bdevs_discovered": 2, 00:15:46.056 "num_base_bdevs_operational": 2, 00:15:46.056 "base_bdevs_list": [ 00:15:46.056 { 00:15:46.056 "name": null, 00:15:46.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.056 "is_configured": false, 00:15:46.056 "data_offset": 0, 00:15:46.056 "data_size": 65536 00:15:46.056 }, 
00:15:46.056 { 00:15:46.056 "name": "BaseBdev2", 00:15:46.056 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:46.056 "is_configured": true, 00:15:46.056 "data_offset": 0, 00:15:46.056 "data_size": 65536 00:15:46.056 }, 00:15:46.056 { 00:15:46.056 "name": "BaseBdev3", 00:15:46.056 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:46.056 "is_configured": true, 00:15:46.056 "data_offset": 0, 00:15:46.056 "data_size": 65536 00:15:46.056 } 00:15:46.056 ] 00:15:46.056 }' 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.056 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.315 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.315 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.315 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.315 [2024-11-15 09:34:34.641907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.315 [2024-11-15 09:34:34.659618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:46.315 09:34:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.315 09:34:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:46.315 [2024-11-15 09:34:34.667641] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.252 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.252 "name": "raid_bdev1", 00:15:47.252 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:47.252 "strip_size_kb": 64, 00:15:47.252 "state": "online", 00:15:47.252 "raid_level": "raid5f", 00:15:47.252 "superblock": false, 00:15:47.252 "num_base_bdevs": 3, 00:15:47.252 "num_base_bdevs_discovered": 3, 00:15:47.252 "num_base_bdevs_operational": 3, 00:15:47.252 "process": { 00:15:47.252 "type": "rebuild", 00:15:47.252 "target": "spare", 00:15:47.252 "progress": { 00:15:47.252 "blocks": 20480, 00:15:47.252 "percent": 15 00:15:47.252 } 00:15:47.252 }, 00:15:47.252 "base_bdevs_list": [ 00:15:47.252 { 00:15:47.252 "name": "spare", 00:15:47.252 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 0, 00:15:47.252 "data_size": 65536 00:15:47.252 }, 00:15:47.252 { 00:15:47.252 "name": "BaseBdev2", 00:15:47.252 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 0, 00:15:47.252 "data_size": 65536 00:15:47.252 }, 00:15:47.252 { 00:15:47.252 "name": "BaseBdev3", 00:15:47.252 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:47.252 "is_configured": true, 00:15:47.252 
"data_offset": 0, 00:15:47.252 "data_size": 65536 00:15:47.252 } 00:15:47.252 ] 00:15:47.252 }' 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.511 [2024-11-15 09:34:35.823623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.511 [2024-11-15 09:34:35.881374] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:47.511 [2024-11-15 09:34:35.881497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.511 [2024-11-15 09:34:35.881536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.511 [2024-11-15 09:34:35.881548] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.511 09:34:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.511 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.770 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.770 "name": "raid_bdev1", 00:15:47.770 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:47.770 "strip_size_kb": 64, 00:15:47.770 "state": "online", 00:15:47.770 "raid_level": "raid5f", 00:15:47.770 "superblock": false, 00:15:47.770 "num_base_bdevs": 3, 00:15:47.770 "num_base_bdevs_discovered": 2, 00:15:47.770 "num_base_bdevs_operational": 2, 00:15:47.770 "base_bdevs_list": [ 00:15:47.770 { 00:15:47.770 "name": null, 00:15:47.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.770 "is_configured": false, 00:15:47.770 "data_offset": 0, 00:15:47.770 "data_size": 65536 00:15:47.770 }, 00:15:47.770 { 00:15:47.770 
"name": "BaseBdev2", 00:15:47.770 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:47.770 "is_configured": true, 00:15:47.770 "data_offset": 0, 00:15:47.770 "data_size": 65536 00:15:47.770 }, 00:15:47.770 { 00:15:47.770 "name": "BaseBdev3", 00:15:47.770 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:47.770 "is_configured": true, 00:15:47.770 "data_offset": 0, 00:15:47.770 "data_size": 65536 00:15:47.770 } 00:15:47.770 ] 00:15:47.770 }' 00:15:47.770 09:34:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.771 09:34:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.029 "name": "raid_bdev1", 00:15:48.029 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:48.029 "strip_size_kb": 64, 00:15:48.029 "state": 
"online", 00:15:48.029 "raid_level": "raid5f", 00:15:48.029 "superblock": false, 00:15:48.029 "num_base_bdevs": 3, 00:15:48.029 "num_base_bdevs_discovered": 2, 00:15:48.029 "num_base_bdevs_operational": 2, 00:15:48.029 "base_bdevs_list": [ 00:15:48.029 { 00:15:48.029 "name": null, 00:15:48.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.029 "is_configured": false, 00:15:48.029 "data_offset": 0, 00:15:48.029 "data_size": 65536 00:15:48.029 }, 00:15:48.029 { 00:15:48.029 "name": "BaseBdev2", 00:15:48.029 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:48.029 "is_configured": true, 00:15:48.029 "data_offset": 0, 00:15:48.029 "data_size": 65536 00:15:48.029 }, 00:15:48.029 { 00:15:48.029 "name": "BaseBdev3", 00:15:48.029 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:48.029 "is_configured": true, 00:15:48.029 "data_offset": 0, 00:15:48.029 "data_size": 65536 00:15:48.029 } 00:15:48.029 ] 00:15:48.029 }' 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.029 09:34:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.029 [2024-11-15 09:34:36.490112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.287 [2024-11-15 09:34:36.509054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:48.287 09:34:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.287 09:34:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:48.287 [2024-11-15 09:34:36.518389] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.225 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.226 "name": "raid_bdev1", 00:15:49.226 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:49.226 "strip_size_kb": 64, 00:15:49.226 "state": "online", 00:15:49.226 "raid_level": "raid5f", 00:15:49.226 "superblock": false, 00:15:49.226 "num_base_bdevs": 3, 00:15:49.226 "num_base_bdevs_discovered": 3, 00:15:49.226 "num_base_bdevs_operational": 3, 00:15:49.226 "process": { 00:15:49.226 "type": "rebuild", 00:15:49.226 "target": "spare", 00:15:49.226 "progress": { 
00:15:49.226 "blocks": 20480, 00:15:49.226 "percent": 15 00:15:49.226 } 00:15:49.226 }, 00:15:49.226 "base_bdevs_list": [ 00:15:49.226 { 00:15:49.226 "name": "spare", 00:15:49.226 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:49.226 "is_configured": true, 00:15:49.226 "data_offset": 0, 00:15:49.226 "data_size": 65536 00:15:49.226 }, 00:15:49.226 { 00:15:49.226 "name": "BaseBdev2", 00:15:49.226 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:49.226 "is_configured": true, 00:15:49.226 "data_offset": 0, 00:15:49.226 "data_size": 65536 00:15:49.226 }, 00:15:49.226 { 00:15:49.226 "name": "BaseBdev3", 00:15:49.226 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:49.226 "is_configured": true, 00:15:49.226 "data_offset": 0, 00:15:49.226 "data_size": 65536 00:15:49.226 } 00:15:49.226 ] 00:15:49.226 }' 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=571 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.226 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.226 "name": "raid_bdev1", 00:15:49.226 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:49.226 "strip_size_kb": 64, 00:15:49.226 "state": "online", 00:15:49.226 "raid_level": "raid5f", 00:15:49.226 "superblock": false, 00:15:49.226 "num_base_bdevs": 3, 00:15:49.226 "num_base_bdevs_discovered": 3, 00:15:49.226 "num_base_bdevs_operational": 3, 00:15:49.226 "process": { 00:15:49.226 "type": "rebuild", 00:15:49.226 "target": "spare", 00:15:49.226 "progress": { 00:15:49.226 "blocks": 22528, 00:15:49.226 "percent": 17 00:15:49.226 } 00:15:49.226 }, 00:15:49.226 "base_bdevs_list": [ 00:15:49.226 { 00:15:49.226 "name": "spare", 00:15:49.226 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:49.226 "is_configured": true, 00:15:49.226 "data_offset": 0, 00:15:49.226 "data_size": 65536 00:15:49.226 }, 00:15:49.226 { 00:15:49.226 "name": "BaseBdev2", 00:15:49.226 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:49.226 "is_configured": true, 00:15:49.226 
"data_offset": 0, 00:15:49.226 "data_size": 65536 00:15:49.226 }, 00:15:49.226 { 00:15:49.226 "name": "BaseBdev3", 00:15:49.226 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:49.226 "is_configured": true, 00:15:49.226 "data_offset": 0, 00:15:49.226 "data_size": 65536 00:15:49.226 } 00:15:49.226 ] 00:15:49.226 }' 00:15:49.485 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.485 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.485 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.485 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.485 09:34:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.421 09:34:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.421 "name": "raid_bdev1", 00:15:50.421 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:50.421 "strip_size_kb": 64, 00:15:50.421 "state": "online", 00:15:50.421 "raid_level": "raid5f", 00:15:50.421 "superblock": false, 00:15:50.421 "num_base_bdevs": 3, 00:15:50.421 "num_base_bdevs_discovered": 3, 00:15:50.421 "num_base_bdevs_operational": 3, 00:15:50.421 "process": { 00:15:50.421 "type": "rebuild", 00:15:50.421 "target": "spare", 00:15:50.421 "progress": { 00:15:50.421 "blocks": 45056, 00:15:50.421 "percent": 34 00:15:50.421 } 00:15:50.421 }, 00:15:50.421 "base_bdevs_list": [ 00:15:50.421 { 00:15:50.421 "name": "spare", 00:15:50.421 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:50.421 "is_configured": true, 00:15:50.421 "data_offset": 0, 00:15:50.421 "data_size": 65536 00:15:50.421 }, 00:15:50.421 { 00:15:50.421 "name": "BaseBdev2", 00:15:50.421 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:50.421 "is_configured": true, 00:15:50.421 "data_offset": 0, 00:15:50.421 "data_size": 65536 00:15:50.421 }, 00:15:50.421 { 00:15:50.421 "name": "BaseBdev3", 00:15:50.421 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:50.421 "is_configured": true, 00:15:50.421 "data_offset": 0, 00:15:50.421 "data_size": 65536 00:15:50.421 } 00:15:50.421 ] 00:15:50.421 }' 00:15:50.421 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.680 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.680 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.680 09:34:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.680 09:34:38 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.616 09:34:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.616 "name": "raid_bdev1", 00:15:51.616 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:51.616 "strip_size_kb": 64, 00:15:51.616 "state": "online", 00:15:51.616 "raid_level": "raid5f", 00:15:51.616 "superblock": false, 00:15:51.616 "num_base_bdevs": 3, 00:15:51.616 "num_base_bdevs_discovered": 3, 00:15:51.616 "num_base_bdevs_operational": 3, 00:15:51.616 "process": { 00:15:51.616 "type": "rebuild", 00:15:51.616 "target": "spare", 00:15:51.616 "progress": { 00:15:51.616 "blocks": 69632, 00:15:51.616 "percent": 53 00:15:51.616 } 00:15:51.616 }, 00:15:51.616 "base_bdevs_list": [ 00:15:51.616 { 00:15:51.616 "name": "spare", 00:15:51.616 
"uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:51.616 "is_configured": true, 00:15:51.616 "data_offset": 0, 00:15:51.616 "data_size": 65536 00:15:51.616 }, 00:15:51.616 { 00:15:51.616 "name": "BaseBdev2", 00:15:51.616 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:51.616 "is_configured": true, 00:15:51.616 "data_offset": 0, 00:15:51.616 "data_size": 65536 00:15:51.616 }, 00:15:51.616 { 00:15:51.616 "name": "BaseBdev3", 00:15:51.616 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:51.616 "is_configured": true, 00:15:51.616 "data_offset": 0, 00:15:51.616 "data_size": 65536 00:15:51.616 } 00:15:51.616 ] 00:15:51.616 }' 00:15:51.616 09:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.616 09:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.616 09:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.876 09:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.876 09:34:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.813 09:34:41 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.813 "name": "raid_bdev1", 00:15:52.813 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:52.813 "strip_size_kb": 64, 00:15:52.813 "state": "online", 00:15:52.813 "raid_level": "raid5f", 00:15:52.813 "superblock": false, 00:15:52.813 "num_base_bdevs": 3, 00:15:52.813 "num_base_bdevs_discovered": 3, 00:15:52.813 "num_base_bdevs_operational": 3, 00:15:52.813 "process": { 00:15:52.813 "type": "rebuild", 00:15:52.813 "target": "spare", 00:15:52.813 "progress": { 00:15:52.813 "blocks": 92160, 00:15:52.813 "percent": 70 00:15:52.813 } 00:15:52.813 }, 00:15:52.813 "base_bdevs_list": [ 00:15:52.813 { 00:15:52.813 "name": "spare", 00:15:52.813 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:52.813 "is_configured": true, 00:15:52.813 "data_offset": 0, 00:15:52.813 "data_size": 65536 00:15:52.813 }, 00:15:52.813 { 00:15:52.813 "name": "BaseBdev2", 00:15:52.813 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:52.813 "is_configured": true, 00:15:52.813 "data_offset": 0, 00:15:52.813 "data_size": 65536 00:15:52.813 }, 00:15:52.813 { 00:15:52.813 "name": "BaseBdev3", 00:15:52.813 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:52.813 "is_configured": true, 00:15:52.813 "data_offset": 0, 00:15:52.813 "data_size": 65536 00:15:52.813 } 00:15:52.813 ] 00:15:52.813 }' 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.813 09:34:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.191 "name": "raid_bdev1", 00:15:54.191 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:54.191 "strip_size_kb": 64, 00:15:54.191 "state": "online", 00:15:54.191 "raid_level": "raid5f", 00:15:54.191 "superblock": false, 00:15:54.191 "num_base_bdevs": 3, 00:15:54.191 "num_base_bdevs_discovered": 3, 00:15:54.191 
"num_base_bdevs_operational": 3, 00:15:54.191 "process": { 00:15:54.191 "type": "rebuild", 00:15:54.191 "target": "spare", 00:15:54.191 "progress": { 00:15:54.191 "blocks": 114688, 00:15:54.191 "percent": 87 00:15:54.191 } 00:15:54.191 }, 00:15:54.191 "base_bdevs_list": [ 00:15:54.191 { 00:15:54.191 "name": "spare", 00:15:54.191 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:54.191 "is_configured": true, 00:15:54.191 "data_offset": 0, 00:15:54.191 "data_size": 65536 00:15:54.191 }, 00:15:54.191 { 00:15:54.191 "name": "BaseBdev2", 00:15:54.191 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:54.191 "is_configured": true, 00:15:54.191 "data_offset": 0, 00:15:54.191 "data_size": 65536 00:15:54.191 }, 00:15:54.191 { 00:15:54.191 "name": "BaseBdev3", 00:15:54.191 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:54.191 "is_configured": true, 00:15:54.191 "data_offset": 0, 00:15:54.191 "data_size": 65536 00:15:54.191 } 00:15:54.191 ] 00:15:54.191 }' 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.191 09:34:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.759 [2024-11-15 09:34:42.984488] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:54.759 [2024-11-15 09:34:42.984625] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:54.759 [2024-11-15 09:34:42.984682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.017 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.017 "name": "raid_bdev1", 00:15:55.018 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:55.018 "strip_size_kb": 64, 00:15:55.018 "state": "online", 00:15:55.018 "raid_level": "raid5f", 00:15:55.018 "superblock": false, 00:15:55.018 "num_base_bdevs": 3, 00:15:55.018 "num_base_bdevs_discovered": 3, 00:15:55.018 "num_base_bdevs_operational": 3, 00:15:55.018 "base_bdevs_list": [ 00:15:55.018 { 00:15:55.018 "name": "spare", 00:15:55.018 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:55.018 "is_configured": true, 00:15:55.018 "data_offset": 0, 00:15:55.018 "data_size": 65536 00:15:55.018 }, 00:15:55.018 { 00:15:55.018 "name": "BaseBdev2", 00:15:55.018 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:55.018 "is_configured": true, 00:15:55.018 
"data_offset": 0, 00:15:55.018 "data_size": 65536 00:15:55.018 }, 00:15:55.018 { 00:15:55.018 "name": "BaseBdev3", 00:15:55.018 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:55.018 "is_configured": true, 00:15:55.018 "data_offset": 0, 00:15:55.018 "data_size": 65536 00:15:55.018 } 00:15:55.018 ] 00:15:55.018 }' 00:15:55.018 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.018 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:55.018 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.277 09:34:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.277 "name": "raid_bdev1", 00:15:55.277 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:55.277 "strip_size_kb": 64, 00:15:55.277 "state": "online", 00:15:55.277 "raid_level": "raid5f", 00:15:55.277 "superblock": false, 00:15:55.277 "num_base_bdevs": 3, 00:15:55.277 "num_base_bdevs_discovered": 3, 00:15:55.277 "num_base_bdevs_operational": 3, 00:15:55.277 "base_bdevs_list": [ 00:15:55.277 { 00:15:55.277 "name": "spare", 00:15:55.277 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:55.277 "is_configured": true, 00:15:55.277 "data_offset": 0, 00:15:55.277 "data_size": 65536 00:15:55.277 }, 00:15:55.277 { 00:15:55.277 "name": "BaseBdev2", 00:15:55.277 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:55.277 "is_configured": true, 00:15:55.277 "data_offset": 0, 00:15:55.277 "data_size": 65536 00:15:55.277 }, 00:15:55.277 { 00:15:55.277 "name": "BaseBdev3", 00:15:55.277 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:55.277 "is_configured": true, 00:15:55.277 "data_offset": 0, 00:15:55.277 "data_size": 65536 00:15:55.277 } 00:15:55.277 ] 00:15:55.277 }' 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.277 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.278 09:34:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.278 "name": "raid_bdev1", 00:15:55.278 "uuid": "a8254d18-1182-4411-ab95-5c5c59bf0ad2", 00:15:55.278 "strip_size_kb": 64, 00:15:55.278 "state": "online", 00:15:55.278 "raid_level": "raid5f", 00:15:55.278 "superblock": false, 00:15:55.278 "num_base_bdevs": 3, 00:15:55.278 "num_base_bdevs_discovered": 3, 00:15:55.278 "num_base_bdevs_operational": 3, 00:15:55.278 "base_bdevs_list": [ 00:15:55.278 { 00:15:55.278 "name": "spare", 00:15:55.278 "uuid": "8699672b-bf47-5f9f-b9a6-f8bef748b8c2", 00:15:55.278 "is_configured": true, 00:15:55.278 "data_offset": 0, 00:15:55.278 "data_size": 65536 00:15:55.278 }, 00:15:55.278 { 00:15:55.278 
"name": "BaseBdev2", 00:15:55.278 "uuid": "fba2ac73-2beb-5d5b-8f8f-d694ffa1ff87", 00:15:55.278 "is_configured": true, 00:15:55.278 "data_offset": 0, 00:15:55.278 "data_size": 65536 00:15:55.278 }, 00:15:55.278 { 00:15:55.278 "name": "BaseBdev3", 00:15:55.278 "uuid": "1418b860-df82-59ca-b39d-6e78749cf65e", 00:15:55.278 "is_configured": true, 00:15:55.278 "data_offset": 0, 00:15:55.278 "data_size": 65536 00:15:55.278 } 00:15:55.278 ] 00:15:55.278 }' 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.278 09:34:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.902 [2024-11-15 09:34:44.119790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.902 [2024-11-15 09:34:44.119839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:55.902 [2024-11-15 09:34:44.119984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.902 [2024-11-15 09:34:44.120091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.902 [2024-11-15 09:34:44.120113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:55.902 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:55.903 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:56.161 /dev/nbd0 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.161 1+0 records in 00:15:56.161 1+0 records out 00:15:56.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418391 s, 9.8 MB/s 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:56.161 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:56.420 /dev/nbd1 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.420 1+0 records in 00:15:56.420 1+0 records out 00:15:56.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372235 s, 11.0 MB/s 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:56.420 09:34:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:56.420 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:56.678 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:56.678 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:56.678 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:56.678 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.678 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:56.678 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.679 09:34:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:56.936 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.936 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.936 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.936 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.936 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.936 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.937 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:56.937 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:56.937 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.937 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:56.937 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:56.937 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 82006 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 82006 ']' 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 82006 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82006 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82006' 00:15:57.196 killing process with pid 82006 00:15:57.196 Received shutdown signal, test time was about 60.000000 seconds 00:15:57.196 00:15:57.196 Latency(us) 00:15:57.196 [2024-11-15T09:34:45.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.196 [2024-11-15T09:34:45.659Z] =================================================================================================================== 00:15:57.196 [2024-11-15T09:34:45.659Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 82006 00:15:57.196 [2024-11-15 09:34:45.456541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.196 09:34:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 82006 00:15:57.455 [2024-11-15 09:34:45.895155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:58.841 00:15:58.841 real 0m15.432s 00:15:58.841 user 0m18.836s 00:15:58.841 sys 0m2.053s 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:58.841 ************************************ 00:15:58.841 END TEST raid5f_rebuild_test 00:15:58.841 ************************************ 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.841 09:34:47 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:58.841 09:34:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:58.841 09:34:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:58.841 09:34:47 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:15:58.841 ************************************ 00:15:58.841 START TEST raid5f_rebuild_test_sb 00:15:58.841 ************************************ 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82446 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82446 00:15:58.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82446 ']' 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:58.841 09:34:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.841 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:58.841 Zero copy mechanism will not be used. 00:15:58.841 [2024-11-15 09:34:47.266402] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:15:58.841 [2024-11-15 09:34:47.266548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82446 ] 00:15:59.112 [2024-11-15 09:34:47.448274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.371 [2024-11-15 09:34:47.584564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.371 [2024-11-15 09:34:47.826220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.371 [2024-11-15 09:34:47.826306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:59.939 09:34:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 BaseBdev1_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 [2024-11-15 09:34:48.155727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:59.939 [2024-11-15 09:34:48.155802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.939 [2024-11-15 09:34:48.155828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:59.939 [2024-11-15 09:34:48.155841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.939 [2024-11-15 09:34:48.158395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.939 [2024-11-15 09:34:48.158436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.939 BaseBdev1 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 BaseBdev2_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 [2024-11-15 09:34:48.219918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:59.939 [2024-11-15 09:34:48.220019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.939 [2024-11-15 09:34:48.220042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.939 [2024-11-15 09:34:48.220055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.939 [2024-11-15 09:34:48.222693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.939 [2024-11-15 09:34:48.222816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.939 BaseBdev2 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 
09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 BaseBdev3_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 [2024-11-15 09:34:48.297018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:59.939 [2024-11-15 09:34:48.297082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.939 [2024-11-15 09:34:48.297107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:59.939 [2024-11-15 09:34:48.297119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.939 [2024-11-15 09:34:48.299583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.939 [2024-11-15 09:34:48.299639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:59.939 BaseBdev3 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 spare_malloc 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 spare_delay 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.939 [2024-11-15 09:34:48.372361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:59.939 [2024-11-15 09:34:48.372497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.939 [2024-11-15 09:34:48.372528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:59.939 [2024-11-15 09:34:48.372561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.939 [2024-11-15 09:34:48.375399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.939 [2024-11-15 09:34:48.375443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:59.939 spare 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.939 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:59.939 [2024-11-15 09:34:48.384427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.939 [2024-11-15 09:34:48.386729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.939 [2024-11-15 09:34:48.386863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.940 [2024-11-15 09:34:48.387096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:59.940 [2024-11-15 09:34:48.387114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:59.940 [2024-11-15 09:34:48.387434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:59.940 [2024-11-15 09:34:48.393772] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:59.940 [2024-11-15 09:34:48.393832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:59.940 [2024-11-15 09:34:48.394134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.940 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.198 "name": "raid_bdev1", 00:16:00.198 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:00.198 "strip_size_kb": 64, 00:16:00.198 "state": "online", 00:16:00.198 "raid_level": "raid5f", 00:16:00.198 "superblock": true, 00:16:00.198 "num_base_bdevs": 3, 00:16:00.198 "num_base_bdevs_discovered": 3, 00:16:00.198 "num_base_bdevs_operational": 3, 00:16:00.198 "base_bdevs_list": [ 00:16:00.198 { 00:16:00.198 "name": "BaseBdev1", 00:16:00.198 "uuid": "7818dadc-12ac-5110-b99a-e5660cde144d", 00:16:00.198 "is_configured": true, 00:16:00.198 "data_offset": 2048, 00:16:00.198 "data_size": 63488 00:16:00.198 }, 00:16:00.198 { 00:16:00.198 "name": "BaseBdev2", 00:16:00.198 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:00.198 "is_configured": true, 00:16:00.198 "data_offset": 2048, 00:16:00.198 "data_size": 63488 00:16:00.198 }, 00:16:00.198 { 00:16:00.198 "name": 
"BaseBdev3", 00:16:00.198 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:00.198 "is_configured": true, 00:16:00.198 "data_offset": 2048, 00:16:00.198 "data_size": 63488 00:16:00.198 } 00:16:00.198 ] 00:16:00.198 }' 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.198 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.456 [2024-11-15 09:34:48.861813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.456 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:00.713 09:34:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.713 09:34:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:00.713 [2024-11-15 09:34:49.133152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:00.713 /dev/nbd0 00:16:00.713 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 
-- # (( i = 1 )) 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.971 1+0 records in 00:16:00.971 1+0 records out 00:16:00.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577904 s, 7.1 MB/s 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:16:00.971 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:01.229 496+0 records in 00:16:01.229 496+0 records out 00:16:01.229 65011712 bytes (65 MB, 62 MiB) copied, 0.45495 s, 143 MB/s 00:16:01.229 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:01.229 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:01.229 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:01.229 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:01.229 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:01.229 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.229 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.490 [2024-11-15 09:34:49.893158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:01.490 09:34:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.490 [2024-11-15 09:34:49.910374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.490 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.491 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.750 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.750 "name": "raid_bdev1", 00:16:01.750 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:01.750 "strip_size_kb": 64, 00:16:01.750 "state": "online", 00:16:01.750 "raid_level": "raid5f", 00:16:01.750 "superblock": true, 00:16:01.750 "num_base_bdevs": 3, 00:16:01.750 "num_base_bdevs_discovered": 2, 00:16:01.750 "num_base_bdevs_operational": 2, 00:16:01.750 "base_bdevs_list": [ 00:16:01.750 { 00:16:01.750 "name": null, 00:16:01.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.750 "is_configured": false, 00:16:01.750 "data_offset": 0, 00:16:01.750 "data_size": 63488 00:16:01.750 }, 00:16:01.750 { 00:16:01.750 "name": "BaseBdev2", 00:16:01.750 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:01.750 "is_configured": true, 00:16:01.750 "data_offset": 2048, 00:16:01.750 "data_size": 63488 00:16:01.750 }, 00:16:01.750 { 00:16:01.750 "name": "BaseBdev3", 00:16:01.750 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:01.750 "is_configured": true, 00:16:01.750 "data_offset": 2048, 00:16:01.750 "data_size": 63488 00:16:01.750 } 00:16:01.750 ] 00:16:01.750 }' 00:16:01.750 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.750 09:34:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.009 09:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:02.009 09:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.009 09:34:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.009 [2024-11-15 09:34:50.329694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:02.009 [2024-11-15 09:34:50.348591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:02.009 09:34:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.009 09:34:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:02.009 [2024-11-15 09:34:50.358043] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.955 "name": "raid_bdev1", 00:16:02.955 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 
00:16:02.955 "strip_size_kb": 64, 00:16:02.955 "state": "online", 00:16:02.955 "raid_level": "raid5f", 00:16:02.955 "superblock": true, 00:16:02.955 "num_base_bdevs": 3, 00:16:02.955 "num_base_bdevs_discovered": 3, 00:16:02.955 "num_base_bdevs_operational": 3, 00:16:02.955 "process": { 00:16:02.955 "type": "rebuild", 00:16:02.955 "target": "spare", 00:16:02.955 "progress": { 00:16:02.955 "blocks": 20480, 00:16:02.955 "percent": 16 00:16:02.955 } 00:16:02.955 }, 00:16:02.955 "base_bdevs_list": [ 00:16:02.955 { 00:16:02.955 "name": "spare", 00:16:02.955 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:02.955 "is_configured": true, 00:16:02.955 "data_offset": 2048, 00:16:02.955 "data_size": 63488 00:16:02.955 }, 00:16:02.955 { 00:16:02.955 "name": "BaseBdev2", 00:16:02.955 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:02.955 "is_configured": true, 00:16:02.955 "data_offset": 2048, 00:16:02.955 "data_size": 63488 00:16:02.955 }, 00:16:02.955 { 00:16:02.955 "name": "BaseBdev3", 00:16:02.955 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:02.955 "is_configured": true, 00:16:02.955 "data_offset": 2048, 00:16:02.955 "data_size": 63488 00:16:02.955 } 00:16:02.955 ] 00:16:02.955 }' 00:16:02.955 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:03.215 [2024-11-15 09:34:51.510004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.215 [2024-11-15 09:34:51.571740] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:03.215 [2024-11-15 09:34:51.571831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.215 [2024-11-15 09:34:51.571866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:03.215 [2024-11-15 09:34:51.571878] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.215 
09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.215 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.215 "name": "raid_bdev1", 00:16:03.215 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:03.215 "strip_size_kb": 64, 00:16:03.215 "state": "online", 00:16:03.215 "raid_level": "raid5f", 00:16:03.215 "superblock": true, 00:16:03.215 "num_base_bdevs": 3, 00:16:03.215 "num_base_bdevs_discovered": 2, 00:16:03.215 "num_base_bdevs_operational": 2, 00:16:03.215 "base_bdevs_list": [ 00:16:03.215 { 00:16:03.215 "name": null, 00:16:03.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.216 "is_configured": false, 00:16:03.216 "data_offset": 0, 00:16:03.216 "data_size": 63488 00:16:03.216 }, 00:16:03.216 { 00:16:03.216 "name": "BaseBdev2", 00:16:03.216 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:03.216 "is_configured": true, 00:16:03.216 "data_offset": 2048, 00:16:03.216 "data_size": 63488 00:16:03.216 }, 00:16:03.216 { 00:16:03.216 "name": "BaseBdev3", 00:16:03.216 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:03.216 "is_configured": true, 00:16:03.216 "data_offset": 2048, 00:16:03.216 "data_size": 63488 00:16:03.216 } 00:16:03.216 ] 00:16:03.216 }' 00:16:03.216 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.216 09:34:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.784 09:34:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.784 "name": "raid_bdev1", 00:16:03.784 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:03.784 "strip_size_kb": 64, 00:16:03.784 "state": "online", 00:16:03.784 "raid_level": "raid5f", 00:16:03.784 "superblock": true, 00:16:03.784 "num_base_bdevs": 3, 00:16:03.784 "num_base_bdevs_discovered": 2, 00:16:03.784 "num_base_bdevs_operational": 2, 00:16:03.784 "base_bdevs_list": [ 00:16:03.784 { 00:16:03.784 "name": null, 00:16:03.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.784 "is_configured": false, 00:16:03.784 "data_offset": 0, 00:16:03.784 "data_size": 63488 00:16:03.784 }, 00:16:03.784 { 00:16:03.784 "name": "BaseBdev2", 00:16:03.784 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:03.784 "is_configured": true, 00:16:03.784 "data_offset": 2048, 00:16:03.784 "data_size": 63488 00:16:03.784 }, 00:16:03.784 { 00:16:03.784 "name": "BaseBdev3", 00:16:03.784 "uuid": 
"29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:03.784 "is_configured": true, 00:16:03.784 "data_offset": 2048, 00:16:03.784 "data_size": 63488 00:16:03.784 } 00:16:03.784 ] 00:16:03.784 }' 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.784 [2024-11-15 09:34:52.203199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.784 [2024-11-15 09:34:52.221907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.784 09:34:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:03.784 [2024-11-15 09:34:52.230293] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.163 "name": "raid_bdev1", 00:16:05.163 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:05.163 "strip_size_kb": 64, 00:16:05.163 "state": "online", 00:16:05.163 "raid_level": "raid5f", 00:16:05.163 "superblock": true, 00:16:05.163 "num_base_bdevs": 3, 00:16:05.163 "num_base_bdevs_discovered": 3, 00:16:05.163 "num_base_bdevs_operational": 3, 00:16:05.163 "process": { 00:16:05.163 "type": "rebuild", 00:16:05.163 "target": "spare", 00:16:05.163 "progress": { 00:16:05.163 "blocks": 18432, 00:16:05.163 "percent": 14 00:16:05.163 } 00:16:05.163 }, 00:16:05.163 "base_bdevs_list": [ 00:16:05.163 { 00:16:05.163 "name": "spare", 00:16:05.163 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:05.163 "is_configured": true, 00:16:05.163 "data_offset": 2048, 00:16:05.163 "data_size": 63488 00:16:05.163 }, 00:16:05.163 { 00:16:05.163 "name": "BaseBdev2", 00:16:05.163 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:05.163 "is_configured": true, 00:16:05.163 "data_offset": 2048, 00:16:05.163 "data_size": 63488 00:16:05.163 }, 00:16:05.163 { 00:16:05.163 "name": "BaseBdev3", 00:16:05.163 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:05.163 
"is_configured": true, 00:16:05.163 "data_offset": 2048, 00:16:05.163 "data_size": 63488 00:16:05.163 } 00:16:05.163 ] 00:16:05.163 }' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:05.163 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=587 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.163 "name": "raid_bdev1", 00:16:05.163 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:05.163 "strip_size_kb": 64, 00:16:05.163 "state": "online", 00:16:05.163 "raid_level": "raid5f", 00:16:05.163 "superblock": true, 00:16:05.163 "num_base_bdevs": 3, 00:16:05.163 "num_base_bdevs_discovered": 3, 00:16:05.163 "num_base_bdevs_operational": 3, 00:16:05.163 "process": { 00:16:05.163 "type": "rebuild", 00:16:05.163 "target": "spare", 00:16:05.163 "progress": { 00:16:05.163 "blocks": 22528, 00:16:05.163 "percent": 17 00:16:05.163 } 00:16:05.163 }, 00:16:05.163 "base_bdevs_list": [ 00:16:05.163 { 00:16:05.163 "name": "spare", 00:16:05.163 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:05.163 "is_configured": true, 00:16:05.163 "data_offset": 2048, 00:16:05.163 "data_size": 63488 00:16:05.163 }, 00:16:05.163 { 00:16:05.163 "name": "BaseBdev2", 00:16:05.163 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:05.163 "is_configured": true, 00:16:05.163 "data_offset": 2048, 00:16:05.163 "data_size": 63488 00:16:05.163 }, 00:16:05.163 { 00:16:05.163 "name": "BaseBdev3", 00:16:05.163 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:05.163 "is_configured": true, 00:16:05.163 "data_offset": 2048, 00:16:05.163 "data_size": 63488 00:16:05.163 } 00:16:05.163 ] 00:16:05.163 }' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.163 09:34:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.102 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.102 "name": "raid_bdev1", 00:16:06.102 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:06.102 "strip_size_kb": 64, 00:16:06.102 "state": "online", 00:16:06.102 
"raid_level": "raid5f", 00:16:06.102 "superblock": true, 00:16:06.102 "num_base_bdevs": 3, 00:16:06.102 "num_base_bdevs_discovered": 3, 00:16:06.102 "num_base_bdevs_operational": 3, 00:16:06.102 "process": { 00:16:06.102 "type": "rebuild", 00:16:06.102 "target": "spare", 00:16:06.102 "progress": { 00:16:06.102 "blocks": 45056, 00:16:06.102 "percent": 35 00:16:06.102 } 00:16:06.102 }, 00:16:06.102 "base_bdevs_list": [ 00:16:06.102 { 00:16:06.102 "name": "spare", 00:16:06.102 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:06.102 "is_configured": true, 00:16:06.102 "data_offset": 2048, 00:16:06.102 "data_size": 63488 00:16:06.102 }, 00:16:06.102 { 00:16:06.102 "name": "BaseBdev2", 00:16:06.102 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:06.102 "is_configured": true, 00:16:06.102 "data_offset": 2048, 00:16:06.102 "data_size": 63488 00:16:06.102 }, 00:16:06.102 { 00:16:06.102 "name": "BaseBdev3", 00:16:06.102 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:06.102 "is_configured": true, 00:16:06.102 "data_offset": 2048, 00:16:06.102 "data_size": 63488 00:16:06.102 } 00:16:06.102 ] 00:16:06.102 }' 00:16:06.362 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.362 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.362 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.362 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.362 09:34:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.299 "name": "raid_bdev1", 00:16:07.299 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:07.299 "strip_size_kb": 64, 00:16:07.299 "state": "online", 00:16:07.299 "raid_level": "raid5f", 00:16:07.299 "superblock": true, 00:16:07.299 "num_base_bdevs": 3, 00:16:07.299 "num_base_bdevs_discovered": 3, 00:16:07.299 "num_base_bdevs_operational": 3, 00:16:07.299 "process": { 00:16:07.299 "type": "rebuild", 00:16:07.299 "target": "spare", 00:16:07.299 "progress": { 00:16:07.299 "blocks": 69632, 00:16:07.299 "percent": 54 00:16:07.299 } 00:16:07.299 }, 00:16:07.299 "base_bdevs_list": [ 00:16:07.299 { 00:16:07.299 "name": "spare", 00:16:07.299 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:07.299 "is_configured": true, 00:16:07.299 "data_offset": 2048, 00:16:07.299 "data_size": 63488 00:16:07.299 }, 00:16:07.299 { 00:16:07.299 "name": "BaseBdev2", 00:16:07.299 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:07.299 
"is_configured": true, 00:16:07.299 "data_offset": 2048, 00:16:07.299 "data_size": 63488 00:16:07.299 }, 00:16:07.299 { 00:16:07.299 "name": "BaseBdev3", 00:16:07.299 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:07.299 "is_configured": true, 00:16:07.299 "data_offset": 2048, 00:16:07.299 "data_size": 63488 00:16:07.299 } 00:16:07.299 ] 00:16:07.299 }' 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.299 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.300 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.559 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.559 09:34:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.497 09:34:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.497 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.497 "name": "raid_bdev1", 00:16:08.497 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:08.497 "strip_size_kb": 64, 00:16:08.497 "state": "online", 00:16:08.497 "raid_level": "raid5f", 00:16:08.497 "superblock": true, 00:16:08.497 "num_base_bdevs": 3, 00:16:08.497 "num_base_bdevs_discovered": 3, 00:16:08.497 "num_base_bdevs_operational": 3, 00:16:08.497 "process": { 00:16:08.497 "type": "rebuild", 00:16:08.497 "target": "spare", 00:16:08.497 "progress": { 00:16:08.497 "blocks": 92160, 00:16:08.497 "percent": 72 00:16:08.497 } 00:16:08.497 }, 00:16:08.497 "base_bdevs_list": [ 00:16:08.497 { 00:16:08.497 "name": "spare", 00:16:08.497 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:08.497 "is_configured": true, 00:16:08.497 "data_offset": 2048, 00:16:08.497 "data_size": 63488 00:16:08.497 }, 00:16:08.497 { 00:16:08.497 "name": "BaseBdev2", 00:16:08.497 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:08.497 "is_configured": true, 00:16:08.497 "data_offset": 2048, 00:16:08.497 "data_size": 63488 00:16:08.497 }, 00:16:08.497 { 00:16:08.497 "name": "BaseBdev3", 00:16:08.497 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:08.498 "is_configured": true, 00:16:08.498 "data_offset": 2048, 00:16:08.498 "data_size": 63488 00:16:08.498 } 00:16:08.498 ] 00:16:08.498 }' 00:16:08.498 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.498 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.498 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.498 09:34:56 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.498 09:34:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.878 "name": "raid_bdev1", 00:16:09.878 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:09.878 "strip_size_kb": 64, 00:16:09.878 "state": "online", 00:16:09.878 "raid_level": "raid5f", 00:16:09.878 "superblock": true, 00:16:09.878 "num_base_bdevs": 3, 00:16:09.878 "num_base_bdevs_discovered": 3, 00:16:09.878 "num_base_bdevs_operational": 3, 00:16:09.878 "process": { 00:16:09.878 "type": "rebuild", 00:16:09.878 "target": "spare", 00:16:09.878 "progress": { 00:16:09.878 "blocks": 114688, 
00:16:09.878 "percent": 90 00:16:09.878 } 00:16:09.878 }, 00:16:09.878 "base_bdevs_list": [ 00:16:09.878 { 00:16:09.878 "name": "spare", 00:16:09.878 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:09.878 "is_configured": true, 00:16:09.878 "data_offset": 2048, 00:16:09.878 "data_size": 63488 00:16:09.878 }, 00:16:09.878 { 00:16:09.878 "name": "BaseBdev2", 00:16:09.878 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:09.878 "is_configured": true, 00:16:09.878 "data_offset": 2048, 00:16:09.878 "data_size": 63488 00:16:09.878 }, 00:16:09.878 { 00:16:09.878 "name": "BaseBdev3", 00:16:09.878 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:09.878 "is_configured": true, 00:16:09.878 "data_offset": 2048, 00:16:09.878 "data_size": 63488 00:16:09.878 } 00:16:09.878 ] 00:16:09.878 }' 00:16:09.878 09:34:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.878 09:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.878 09:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.878 09:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.878 09:34:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.138 [2024-11-15 09:34:58.500147] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:10.138 [2024-11-15 09:34:58.500263] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:10.138 [2024-11-15 09:34:58.500407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.708 
09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.708 "name": "raid_bdev1", 00:16:10.708 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:10.708 "strip_size_kb": 64, 00:16:10.708 "state": "online", 00:16:10.708 "raid_level": "raid5f", 00:16:10.708 "superblock": true, 00:16:10.708 "num_base_bdevs": 3, 00:16:10.708 "num_base_bdevs_discovered": 3, 00:16:10.708 "num_base_bdevs_operational": 3, 00:16:10.708 "base_bdevs_list": [ 00:16:10.708 { 00:16:10.708 "name": "spare", 00:16:10.708 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:10.708 "is_configured": true, 00:16:10.708 "data_offset": 2048, 00:16:10.708 "data_size": 63488 00:16:10.708 }, 00:16:10.708 { 00:16:10.708 "name": "BaseBdev2", 00:16:10.708 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:10.708 "is_configured": true, 00:16:10.708 "data_offset": 2048, 00:16:10.708 "data_size": 63488 00:16:10.708 }, 00:16:10.708 { 00:16:10.708 "name": "BaseBdev3", 00:16:10.708 
"uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:10.708 "is_configured": true, 00:16:10.708 "data_offset": 2048, 00:16:10.708 "data_size": 63488 00:16:10.708 } 00:16:10.708 ] 00:16:10.708 }' 00:16:10.708 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.969 "name": 
"raid_bdev1", 00:16:10.969 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:10.969 "strip_size_kb": 64, 00:16:10.969 "state": "online", 00:16:10.969 "raid_level": "raid5f", 00:16:10.969 "superblock": true, 00:16:10.969 "num_base_bdevs": 3, 00:16:10.969 "num_base_bdevs_discovered": 3, 00:16:10.969 "num_base_bdevs_operational": 3, 00:16:10.969 "base_bdevs_list": [ 00:16:10.969 { 00:16:10.969 "name": "spare", 00:16:10.969 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:10.969 "is_configured": true, 00:16:10.969 "data_offset": 2048, 00:16:10.969 "data_size": 63488 00:16:10.969 }, 00:16:10.969 { 00:16:10.969 "name": "BaseBdev2", 00:16:10.969 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:10.969 "is_configured": true, 00:16:10.969 "data_offset": 2048, 00:16:10.969 "data_size": 63488 00:16:10.969 }, 00:16:10.969 { 00:16:10.969 "name": "BaseBdev3", 00:16:10.969 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:10.969 "is_configured": true, 00:16:10.969 "data_offset": 2048, 00:16:10.969 "data_size": 63488 00:16:10.969 } 00:16:10.969 ] 00:16:10.969 }' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.969 "name": "raid_bdev1", 00:16:10.969 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:10.969 "strip_size_kb": 64, 00:16:10.969 "state": "online", 00:16:10.969 "raid_level": "raid5f", 00:16:10.969 "superblock": true, 00:16:10.969 "num_base_bdevs": 3, 00:16:10.969 "num_base_bdevs_discovered": 3, 00:16:10.969 "num_base_bdevs_operational": 3, 00:16:10.969 "base_bdevs_list": [ 00:16:10.969 { 00:16:10.969 "name": "spare", 00:16:10.969 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:10.969 "is_configured": true, 00:16:10.969 "data_offset": 2048, 00:16:10.969 "data_size": 63488 00:16:10.969 }, 00:16:10.969 { 00:16:10.969 "name": "BaseBdev2", 
00:16:10.969 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:10.969 "is_configured": true, 00:16:10.969 "data_offset": 2048, 00:16:10.969 "data_size": 63488 00:16:10.969 }, 00:16:10.969 { 00:16:10.969 "name": "BaseBdev3", 00:16:10.969 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:10.969 "is_configured": true, 00:16:10.969 "data_offset": 2048, 00:16:10.969 "data_size": 63488 00:16:10.969 } 00:16:10.969 ] 00:16:10.969 }' 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.969 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 [2024-11-15 09:34:59.803411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.539 [2024-11-15 09:34:59.803461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.539 [2024-11-15 09:34:59.803558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.539 [2024-11-15 09:34:59.803646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.539 [2024-11-15 09:34:59.803668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:11.539 09:34:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:11.539 09:34:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:11.799 /dev/nbd0 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:11.799 1+0 records in 00:16:11.799 1+0 records out 00:16:11.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435157 s, 9.4 MB/s 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:16:11.799 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:12.058 /dev/nbd1 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.058 1+0 records in 00:16:12.058 1+0 records out 00:16:12.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460858 s, 8.9 MB/s 00:16:12.058 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.059 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:12.059 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.059 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:12.059 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:12.059 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.059 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:12.059 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:12.318 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:12.318 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.318 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:12.318 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.318 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:12.318 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.318 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.577 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.577 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.577 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.577 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.577 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.577 09:35:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.578 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:12.578 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.578 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.578 09:35:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.578 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.838 [2024-11-15 09:35:01.047737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:12.838 [2024-11-15 09:35:01.047813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.838 [2024-11-15 09:35:01.047837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:12.838 [2024-11-15 09:35:01.047867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.838 [2024-11-15 09:35:01.050732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.838 [2024-11-15 09:35:01.050776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:12.838 [2024-11-15 09:35:01.050910] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:12.838 [2024-11-15 09:35:01.050996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.838 [2024-11-15 09:35:01.051168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.838 [2024-11-15 09:35:01.051299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:12.838 spare 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.838 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.838 [2024-11-15 09:35:01.151219] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:12.838 [2024-11-15 09:35:01.151267] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:12.838 [2024-11-15 09:35:01.151643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:12.838 [2024-11-15 09:35:01.157678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:12.838 [2024-11-15 09:35:01.157703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:12.839 [2024-11-15 09:35:01.157959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.839 09:35:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.839 "name": "raid_bdev1", 00:16:12.839 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:12.839 "strip_size_kb": 64, 00:16:12.839 "state": "online", 00:16:12.839 "raid_level": "raid5f", 00:16:12.839 "superblock": true, 00:16:12.839 "num_base_bdevs": 3, 00:16:12.839 "num_base_bdevs_discovered": 3, 00:16:12.839 "num_base_bdevs_operational": 3, 00:16:12.839 "base_bdevs_list": [ 00:16:12.839 { 00:16:12.839 "name": "spare", 00:16:12.839 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:12.839 "is_configured": true, 00:16:12.839 "data_offset": 2048, 00:16:12.839 "data_size": 63488 00:16:12.839 }, 00:16:12.839 { 00:16:12.839 "name": "BaseBdev2", 00:16:12.839 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:12.839 "is_configured": true, 00:16:12.839 "data_offset": 2048, 00:16:12.839 "data_size": 63488 00:16:12.839 }, 00:16:12.839 { 00:16:12.839 "name": "BaseBdev3", 00:16:12.839 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:12.839 "is_configured": true, 00:16:12.839 "data_offset": 2048, 00:16:12.839 "data_size": 63488 00:16:12.839 } 00:16:12.839 ] 00:16:12.839 }' 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.839 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.407 "name": "raid_bdev1", 00:16:13.407 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:13.407 "strip_size_kb": 64, 00:16:13.407 "state": "online", 00:16:13.407 "raid_level": "raid5f", 00:16:13.407 "superblock": true, 00:16:13.407 "num_base_bdevs": 3, 00:16:13.407 "num_base_bdevs_discovered": 3, 00:16:13.407 "num_base_bdevs_operational": 3, 00:16:13.407 "base_bdevs_list": [ 00:16:13.407 { 00:16:13.407 "name": "spare", 00:16:13.407 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:13.407 "is_configured": true, 00:16:13.407 "data_offset": 2048, 00:16:13.407 "data_size": 63488 00:16:13.407 }, 00:16:13.407 { 00:16:13.407 "name": "BaseBdev2", 00:16:13.407 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:13.407 "is_configured": true, 00:16:13.407 "data_offset": 2048, 00:16:13.407 "data_size": 63488 
00:16:13.407 }, 00:16:13.407 { 00:16:13.407 "name": "BaseBdev3", 00:16:13.407 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:13.407 "is_configured": true, 00:16:13.407 "data_offset": 2048, 00:16:13.407 "data_size": 63488 00:16:13.407 } 00:16:13.407 ] 00:16:13.407 }' 00:16:13.407 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 [2024-11-15 09:35:01.752950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.408 "name": "raid_bdev1", 00:16:13.408 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:13.408 "strip_size_kb": 64, 00:16:13.408 "state": "online", 00:16:13.408 "raid_level": "raid5f", 00:16:13.408 "superblock": true, 00:16:13.408 "num_base_bdevs": 3, 
00:16:13.408 "num_base_bdevs_discovered": 2, 00:16:13.408 "num_base_bdevs_operational": 2, 00:16:13.408 "base_bdevs_list": [ 00:16:13.408 { 00:16:13.408 "name": null, 00:16:13.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.408 "is_configured": false, 00:16:13.408 "data_offset": 0, 00:16:13.408 "data_size": 63488 00:16:13.408 }, 00:16:13.408 { 00:16:13.408 "name": "BaseBdev2", 00:16:13.408 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:13.408 "is_configured": true, 00:16:13.408 "data_offset": 2048, 00:16:13.408 "data_size": 63488 00:16:13.408 }, 00:16:13.408 { 00:16:13.408 "name": "BaseBdev3", 00:16:13.408 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:13.408 "is_configured": true, 00:16:13.408 "data_offset": 2048, 00:16:13.408 "data_size": 63488 00:16:13.408 } 00:16:13.408 ] 00:16:13.408 }' 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.408 09:35:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.975 09:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:13.976 09:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.976 09:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.976 [2024-11-15 09:35:02.156287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.976 [2024-11-15 09:35:02.156566] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.976 [2024-11-15 09:35:02.156597] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:13.976 [2024-11-15 09:35:02.156654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.976 [2024-11-15 09:35:02.174861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:13.976 09:35:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.976 09:35:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:13.976 [2024-11-15 09:35:02.183229] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.914 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.915 "name": "raid_bdev1", 00:16:14.915 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:14.915 "strip_size_kb": 64, 00:16:14.915 "state": "online", 00:16:14.915 
"raid_level": "raid5f", 00:16:14.915 "superblock": true, 00:16:14.915 "num_base_bdevs": 3, 00:16:14.915 "num_base_bdevs_discovered": 3, 00:16:14.915 "num_base_bdevs_operational": 3, 00:16:14.915 "process": { 00:16:14.915 "type": "rebuild", 00:16:14.915 "target": "spare", 00:16:14.915 "progress": { 00:16:14.915 "blocks": 20480, 00:16:14.915 "percent": 16 00:16:14.915 } 00:16:14.915 }, 00:16:14.915 "base_bdevs_list": [ 00:16:14.915 { 00:16:14.915 "name": "spare", 00:16:14.915 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:14.915 "is_configured": true, 00:16:14.915 "data_offset": 2048, 00:16:14.915 "data_size": 63488 00:16:14.915 }, 00:16:14.915 { 00:16:14.915 "name": "BaseBdev2", 00:16:14.915 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:14.915 "is_configured": true, 00:16:14.915 "data_offset": 2048, 00:16:14.915 "data_size": 63488 00:16:14.915 }, 00:16:14.915 { 00:16:14.915 "name": "BaseBdev3", 00:16:14.915 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:14.915 "is_configured": true, 00:16:14.915 "data_offset": 2048, 00:16:14.915 "data_size": 63488 00:16:14.915 } 00:16:14.915 ] 00:16:14.915 }' 00:16:14.915 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.915 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.915 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.915 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.915 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.915 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.915 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.915 [2024-11-15 09:35:03.314608] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.175 [2024-11-15 09:35:03.396155] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:15.175 [2024-11-15 09:35:03.396254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.175 [2024-11-15 09:35:03.396274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.175 [2024-11-15 09:35:03.396286] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.175 "name": "raid_bdev1", 00:16:15.175 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:15.175 "strip_size_kb": 64, 00:16:15.175 "state": "online", 00:16:15.175 "raid_level": "raid5f", 00:16:15.175 "superblock": true, 00:16:15.175 "num_base_bdevs": 3, 00:16:15.175 "num_base_bdevs_discovered": 2, 00:16:15.175 "num_base_bdevs_operational": 2, 00:16:15.175 "base_bdevs_list": [ 00:16:15.175 { 00:16:15.175 "name": null, 00:16:15.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.175 "is_configured": false, 00:16:15.175 "data_offset": 0, 00:16:15.175 "data_size": 63488 00:16:15.175 }, 00:16:15.175 { 00:16:15.175 "name": "BaseBdev2", 00:16:15.175 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:15.175 "is_configured": true, 00:16:15.175 "data_offset": 2048, 00:16:15.175 "data_size": 63488 00:16:15.175 }, 00:16:15.175 { 00:16:15.175 "name": "BaseBdev3", 00:16:15.175 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:15.175 "is_configured": true, 00:16:15.175 "data_offset": 2048, 00:16:15.175 "data_size": 63488 00:16:15.175 } 00:16:15.175 ] 00:16:15.175 }' 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.175 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.436 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.436 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.436 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.436 [2024-11-15 09:35:03.895350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.436 [2024-11-15 09:35:03.895440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.436 [2024-11-15 09:35:03.895471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:15.436 [2024-11-15 09:35:03.895489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.436 [2024-11-15 09:35:03.896156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.436 [2024-11-15 09:35:03.896196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.436 [2024-11-15 09:35:03.896329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:15.436 [2024-11-15 09:35:03.896353] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.436 [2024-11-15 09:35:03.896367] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
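As another illustrative aside (not part of the log): the `process.progress` objects in this trace report `"blocks": 20480, "percent": 16`. One consistent reading of those numbers, assuming the rebuild percentage is measured against the array's data capacity, is that raid5f over 3 base bdevs devotes one bdev's worth of capacity to parity, leaving `2 * 63488` data blocks. A hypothetical consistency check (this is an interpretation of the logged values, not the actual SPDK progress calculation):

```python
# Values copied from the process.progress dumps in this trace.
data_size = 63488       # per-base-bdev data size, in blocks
num_base_bdevs = 3      # raid5f array width
blocks_done = 20480     # "blocks" field of process.progress

# Assumption: raid5f spends one bdev's worth of capacity on parity,
# so usable data capacity is (num_base_bdevs - 1) * data_size blocks.
data_blocks = (num_base_bdevs - 1) * data_size

percent = blocks_done * 100 // data_blocks
print(percent)  # prints: 16
```

The result matching the logged `"percent": 16` supports that reading, though the real percentage comes from the raid bdev's own bookkeeping.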
00:16:15.436 [2024-11-15 09:35:03.896404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.699 [2024-11-15 09:35:03.913181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:15.699 spare 00:16:15.699 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.699 09:35:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:15.699 [2024-11-15 09:35:03.920825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.638 "name": "raid_bdev1", 00:16:16.638 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:16.638 "strip_size_kb": 64, 00:16:16.638 "state": 
"online", 00:16:16.638 "raid_level": "raid5f", 00:16:16.638 "superblock": true, 00:16:16.638 "num_base_bdevs": 3, 00:16:16.638 "num_base_bdevs_discovered": 3, 00:16:16.638 "num_base_bdevs_operational": 3, 00:16:16.638 "process": { 00:16:16.638 "type": "rebuild", 00:16:16.638 "target": "spare", 00:16:16.638 "progress": { 00:16:16.638 "blocks": 20480, 00:16:16.638 "percent": 16 00:16:16.638 } 00:16:16.638 }, 00:16:16.638 "base_bdevs_list": [ 00:16:16.638 { 00:16:16.638 "name": "spare", 00:16:16.638 "uuid": "53876ce4-ff39-5b8c-b51b-722e11018c05", 00:16:16.638 "is_configured": true, 00:16:16.638 "data_offset": 2048, 00:16:16.638 "data_size": 63488 00:16:16.638 }, 00:16:16.638 { 00:16:16.638 "name": "BaseBdev2", 00:16:16.638 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:16.638 "is_configured": true, 00:16:16.638 "data_offset": 2048, 00:16:16.638 "data_size": 63488 00:16:16.638 }, 00:16:16.638 { 00:16:16.638 "name": "BaseBdev3", 00:16:16.638 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:16.638 "is_configured": true, 00:16:16.638 "data_offset": 2048, 00:16:16.638 "data_size": 63488 00:16:16.638 } 00:16:16.638 ] 00:16:16.638 }' 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.638 09:35:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.638 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.638 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.638 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.638 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.638 [2024-11-15 09:35:05.052411] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.897 [2024-11-15 09:35:05.134040] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:16.897 [2024-11-15 09:35:05.134112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.897 [2024-11-15 09:35:05.134134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.898 [2024-11-15 09:35:05.134141] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.898 "name": "raid_bdev1", 00:16:16.898 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:16.898 "strip_size_kb": 64, 00:16:16.898 "state": "online", 00:16:16.898 "raid_level": "raid5f", 00:16:16.898 "superblock": true, 00:16:16.898 "num_base_bdevs": 3, 00:16:16.898 "num_base_bdevs_discovered": 2, 00:16:16.898 "num_base_bdevs_operational": 2, 00:16:16.898 "base_bdevs_list": [ 00:16:16.898 { 00:16:16.898 "name": null, 00:16:16.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.898 "is_configured": false, 00:16:16.898 "data_offset": 0, 00:16:16.898 "data_size": 63488 00:16:16.898 }, 00:16:16.898 { 00:16:16.898 "name": "BaseBdev2", 00:16:16.898 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:16.898 "is_configured": true, 00:16:16.898 "data_offset": 2048, 00:16:16.898 "data_size": 63488 00:16:16.898 }, 00:16:16.898 { 00:16:16.898 "name": "BaseBdev3", 00:16:16.898 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:16.898 "is_configured": true, 00:16:16.898 "data_offset": 2048, 00:16:16.898 "data_size": 63488 00:16:16.898 } 00:16:16.898 ] 00:16:16.898 }' 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.898 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.465 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.465 "name": "raid_bdev1", 00:16:17.465 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:17.465 "strip_size_kb": 64, 00:16:17.465 "state": "online", 00:16:17.465 "raid_level": "raid5f", 00:16:17.465 "superblock": true, 00:16:17.465 "num_base_bdevs": 3, 00:16:17.465 "num_base_bdevs_discovered": 2, 00:16:17.466 "num_base_bdevs_operational": 2, 00:16:17.466 "base_bdevs_list": [ 00:16:17.466 { 00:16:17.466 "name": null, 00:16:17.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.466 "is_configured": false, 00:16:17.466 "data_offset": 0, 00:16:17.466 "data_size": 63488 00:16:17.466 }, 00:16:17.466 { 00:16:17.466 "name": "BaseBdev2", 00:16:17.466 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:17.466 "is_configured": true, 00:16:17.466 "data_offset": 2048, 00:16:17.466 "data_size": 63488 00:16:17.466 }, 00:16:17.466 { 00:16:17.466 "name": "BaseBdev3", 00:16:17.466 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:17.466 "is_configured": true, 
00:16:17.466 "data_offset": 2048, 00:16:17.466 "data_size": 63488 00:16:17.466 } 00:16:17.466 ] 00:16:17.466 }' 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.466 [2024-11-15 09:35:05.804881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:17.466 [2024-11-15 09:35:05.804955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.466 [2024-11-15 09:35:05.804986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:17.466 [2024-11-15 09:35:05.804997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.466 [2024-11-15 09:35:05.805559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.466 [2024-11-15 
09:35:05.805585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.466 [2024-11-15 09:35:05.805690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:17.466 [2024-11-15 09:35:05.805714] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.466 [2024-11-15 09:35:05.805745] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:17.466 [2024-11-15 09:35:05.805757] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:17.466 BaseBdev1 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.466 09:35:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.398 09:35:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.398 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.655 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.656 "name": "raid_bdev1", 00:16:18.656 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:18.656 "strip_size_kb": 64, 00:16:18.656 "state": "online", 00:16:18.656 "raid_level": "raid5f", 00:16:18.656 "superblock": true, 00:16:18.656 "num_base_bdevs": 3, 00:16:18.656 "num_base_bdevs_discovered": 2, 00:16:18.656 "num_base_bdevs_operational": 2, 00:16:18.656 "base_bdevs_list": [ 00:16:18.656 { 00:16:18.656 "name": null, 00:16:18.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.656 "is_configured": false, 00:16:18.656 "data_offset": 0, 00:16:18.656 "data_size": 63488 00:16:18.656 }, 00:16:18.656 { 00:16:18.656 "name": "BaseBdev2", 00:16:18.656 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:18.656 "is_configured": true, 00:16:18.656 "data_offset": 2048, 00:16:18.656 "data_size": 63488 00:16:18.656 }, 00:16:18.656 { 00:16:18.656 "name": "BaseBdev3", 00:16:18.656 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:18.656 "is_configured": true, 00:16:18.656 "data_offset": 2048, 00:16:18.656 "data_size": 63488 00:16:18.656 } 00:16:18.656 ] 00:16:18.656 }' 00:16:18.656 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.656 09:35:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.914 "name": "raid_bdev1", 00:16:18.914 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:18.914 "strip_size_kb": 64, 00:16:18.914 "state": "online", 00:16:18.914 "raid_level": "raid5f", 00:16:18.914 "superblock": true, 00:16:18.914 "num_base_bdevs": 3, 00:16:18.914 "num_base_bdevs_discovered": 2, 00:16:18.914 "num_base_bdevs_operational": 2, 00:16:18.914 "base_bdevs_list": [ 00:16:18.914 { 00:16:18.914 "name": null, 00:16:18.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.914 "is_configured": false, 00:16:18.914 "data_offset": 0, 00:16:18.914 "data_size": 63488 00:16:18.914 }, 00:16:18.914 { 00:16:18.914 "name": "BaseBdev2", 00:16:18.914 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 
00:16:18.914 "is_configured": true, 00:16:18.914 "data_offset": 2048, 00:16:18.914 "data_size": 63488 00:16:18.914 }, 00:16:18.914 { 00:16:18.914 "name": "BaseBdev3", 00:16:18.914 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:18.914 "is_configured": true, 00:16:18.914 "data_offset": 2048, 00:16:18.914 "data_size": 63488 00:16:18.914 } 00:16:18.914 ] 00:16:18.914 }' 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.914 09:35:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.914 [2024-11-15 09:35:07.338374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.914 [2024-11-15 09:35:07.338590] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:18.914 [2024-11-15 09:35:07.338617] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:18.914 request: 00:16:18.914 { 00:16:18.914 "base_bdev": "BaseBdev1", 00:16:18.914 "raid_bdev": "raid_bdev1", 00:16:18.914 "method": "bdev_raid_add_base_bdev", 00:16:18.914 "req_id": 1 00:16:18.914 } 00:16:18.914 Got JSON-RPC error response 00:16:18.914 response: 00:16:18.914 { 00:16:18.914 "code": -22, 00:16:18.914 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:18.914 } 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:18.914 09:35:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.289 "name": "raid_bdev1", 00:16:20.289 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:20.289 "strip_size_kb": 64, 00:16:20.289 "state": "online", 00:16:20.289 "raid_level": "raid5f", 00:16:20.289 "superblock": true, 00:16:20.289 "num_base_bdevs": 3, 00:16:20.289 "num_base_bdevs_discovered": 2, 00:16:20.289 "num_base_bdevs_operational": 2, 00:16:20.289 "base_bdevs_list": [ 00:16:20.289 { 00:16:20.289 "name": null, 00:16:20.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.289 "is_configured": false, 00:16:20.289 "data_offset": 0, 00:16:20.289 "data_size": 63488 00:16:20.289 }, 00:16:20.289 { 00:16:20.289 
"name": "BaseBdev2", 00:16:20.289 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:20.289 "is_configured": true, 00:16:20.289 "data_offset": 2048, 00:16:20.289 "data_size": 63488 00:16:20.289 }, 00:16:20.289 { 00:16:20.289 "name": "BaseBdev3", 00:16:20.289 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:20.289 "is_configured": true, 00:16:20.289 "data_offset": 2048, 00:16:20.289 "data_size": 63488 00:16:20.289 } 00:16:20.289 ] 00:16:20.289 }' 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.289 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.546 "name": "raid_bdev1", 00:16:20.546 "uuid": "72a0d667-69d4-4cbe-a5a1-a8aed385b1e2", 00:16:20.546 
"strip_size_kb": 64, 00:16:20.546 "state": "online", 00:16:20.546 "raid_level": "raid5f", 00:16:20.546 "superblock": true, 00:16:20.546 "num_base_bdevs": 3, 00:16:20.546 "num_base_bdevs_discovered": 2, 00:16:20.546 "num_base_bdevs_operational": 2, 00:16:20.546 "base_bdevs_list": [ 00:16:20.546 { 00:16:20.546 "name": null, 00:16:20.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.546 "is_configured": false, 00:16:20.546 "data_offset": 0, 00:16:20.546 "data_size": 63488 00:16:20.546 }, 00:16:20.546 { 00:16:20.546 "name": "BaseBdev2", 00:16:20.546 "uuid": "4e630884-08e7-55d0-be6e-f4f9ee5e38c2", 00:16:20.546 "is_configured": true, 00:16:20.546 "data_offset": 2048, 00:16:20.546 "data_size": 63488 00:16:20.546 }, 00:16:20.546 { 00:16:20.546 "name": "BaseBdev3", 00:16:20.546 "uuid": "29130ba3-ec41-5b42-b43a-9ea8d9ff9dbb", 00:16:20.546 "is_configured": true, 00:16:20.546 "data_offset": 2048, 00:16:20.546 "data_size": 63488 00:16:20.546 } 00:16:20.546 ] 00:16:20.546 }' 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82446 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82446 ']' 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82446 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:20.546 09:35:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82446 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:20.546 killing process with pid 82446 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82446' 00:16:20.546 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82446 00:16:20.546 Received shutdown signal, test time was about 60.000000 seconds 00:16:20.546 00:16:20.546 Latency(us) 00:16:20.546 [2024-11-15T09:35:09.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.546 [2024-11-15T09:35:09.010Z] =================================================================================================================== 00:16:20.547 [2024-11-15T09:35:09.010Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:20.547 [2024-11-15 09:35:08.929608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.547 [2024-11-15 09:35:08.929785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.547 09:35:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82446 00:16:20.547 [2024-11-15 09:35:08.929898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.547 [2024-11-15 09:35:08.929918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:21.112 [2024-11-15 09:35:09.362114] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.486 09:35:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:22.486 00:16:22.486 real 0m23.377s 00:16:22.486 user 0m29.471s 
00:16:22.486 sys 0m2.999s 00:16:22.486 09:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:22.486 09:35:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.486 ************************************ 00:16:22.486 END TEST raid5f_rebuild_test_sb 00:16:22.486 ************************************ 00:16:22.486 09:35:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:22.486 09:35:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:22.486 09:35:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:22.486 09:35:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:22.486 09:35:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.486 ************************************ 00:16:22.486 START TEST raid5f_state_function_test 00:16:22.486 ************************************ 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83194 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83194' 00:16:22.486 Process raid pid: 83194 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83194 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83194 ']' 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:22.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:22.486 09:35:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.486 [2024-11-15 09:35:10.711695] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:16:22.486 [2024-11-15 09:35:10.711828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.486 [2024-11-15 09:35:10.893545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.743 [2024-11-15 09:35:11.037827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.002 [2024-11-15 09:35:11.283831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.002 [2024-11-15 09:35:11.283905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.280 [2024-11-15 09:35:11.565059] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.280 [2024-11-15 09:35:11.565124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.280 [2024-11-15 09:35:11.565135] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.280 [2024-11-15 09:35:11.565146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.280 [2024-11-15 09:35:11.565153] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:23.280 [2024-11-15 09:35:11.565163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.280 [2024-11-15 09:35:11.565169] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.280 [2024-11-15 09:35:11.565179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.280 09:35:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.280 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.281 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.281 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.281 "name": "Existed_Raid", 00:16:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.281 "strip_size_kb": 64, 00:16:23.281 "state": "configuring", 00:16:23.281 "raid_level": "raid5f", 00:16:23.281 "superblock": false, 00:16:23.281 "num_base_bdevs": 4, 00:16:23.281 "num_base_bdevs_discovered": 0, 00:16:23.281 "num_base_bdevs_operational": 4, 00:16:23.281 "base_bdevs_list": [ 00:16:23.281 { 00:16:23.281 "name": "BaseBdev1", 00:16:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.281 "is_configured": false, 00:16:23.281 "data_offset": 0, 00:16:23.281 "data_size": 0 00:16:23.281 }, 00:16:23.281 { 00:16:23.281 "name": "BaseBdev2", 00:16:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.281 "is_configured": false, 00:16:23.281 "data_offset": 0, 00:16:23.281 "data_size": 0 00:16:23.281 }, 00:16:23.281 { 00:16:23.281 "name": "BaseBdev3", 00:16:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.281 "is_configured": false, 00:16:23.281 "data_offset": 0, 00:16:23.281 "data_size": 0 00:16:23.281 }, 00:16:23.281 { 00:16:23.281 "name": "BaseBdev4", 00:16:23.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.281 "is_configured": false, 00:16:23.281 "data_offset": 0, 00:16:23.281 "data_size": 0 00:16:23.281 } 00:16:23.281 ] 00:16:23.281 }' 00:16:23.281 09:35:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.281 09:35:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 [2024-11-15 09:35:12.032276] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.848 [2024-11-15 09:35:12.032333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 [2024-11-15 09:35:12.044232] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.848 [2024-11-15 09:35:12.044291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.848 [2024-11-15 09:35:12.044304] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.848 [2024-11-15 09:35:12.044317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.848 [2024-11-15 09:35:12.044324] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.848 [2024-11-15 09:35:12.044336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.848 [2024-11-15 09:35:12.044343] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:23.848 [2024-11-15 09:35:12.044354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 [2024-11-15 09:35:12.097139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.848 BaseBdev1 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.848 
09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.848 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 [ 00:16:23.848 { 00:16:23.848 "name": "BaseBdev1", 00:16:23.848 "aliases": [ 00:16:23.848 "1759ea22-9d51-411a-b575-912caa15d031" 00:16:23.848 ], 00:16:23.848 "product_name": "Malloc disk", 00:16:23.848 "block_size": 512, 00:16:23.848 "num_blocks": 65536, 00:16:23.848 "uuid": "1759ea22-9d51-411a-b575-912caa15d031", 00:16:23.848 "assigned_rate_limits": { 00:16:23.848 "rw_ios_per_sec": 0, 00:16:23.848 "rw_mbytes_per_sec": 0, 00:16:23.848 "r_mbytes_per_sec": 0, 00:16:23.848 "w_mbytes_per_sec": 0 00:16:23.848 }, 00:16:23.848 "claimed": true, 00:16:23.848 "claim_type": "exclusive_write", 00:16:23.848 "zoned": false, 00:16:23.848 "supported_io_types": { 00:16:23.848 "read": true, 00:16:23.848 "write": true, 00:16:23.848 "unmap": true, 00:16:23.848 "flush": true, 00:16:23.848 "reset": true, 00:16:23.848 "nvme_admin": false, 00:16:23.848 "nvme_io": false, 00:16:23.848 "nvme_io_md": false, 00:16:23.848 "write_zeroes": true, 00:16:23.848 "zcopy": true, 00:16:23.848 "get_zone_info": false, 00:16:23.848 "zone_management": false, 00:16:23.848 "zone_append": false, 00:16:23.848 "compare": false, 00:16:23.848 "compare_and_write": false, 00:16:23.848 "abort": true, 00:16:23.848 "seek_hole": false, 00:16:23.848 "seek_data": false, 00:16:23.848 "copy": true, 00:16:23.849 "nvme_iov_md": false 00:16:23.849 }, 00:16:23.849 "memory_domains": [ 00:16:23.849 { 00:16:23.849 "dma_device_id": "system", 00:16:23.849 "dma_device_type": 1 00:16:23.849 }, 00:16:23.849 { 00:16:23.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.849 "dma_device_type": 2 00:16:23.849 } 00:16:23.849 ], 00:16:23.849 "driver_specific": {} 00:16:23.849 } 
00:16:23.849 ] 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.849 "name": "Existed_Raid", 00:16:23.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.849 "strip_size_kb": 64, 00:16:23.849 "state": "configuring", 00:16:23.849 "raid_level": "raid5f", 00:16:23.849 "superblock": false, 00:16:23.849 "num_base_bdevs": 4, 00:16:23.849 "num_base_bdevs_discovered": 1, 00:16:23.849 "num_base_bdevs_operational": 4, 00:16:23.849 "base_bdevs_list": [ 00:16:23.849 { 00:16:23.849 "name": "BaseBdev1", 00:16:23.849 "uuid": "1759ea22-9d51-411a-b575-912caa15d031", 00:16:23.849 "is_configured": true, 00:16:23.849 "data_offset": 0, 00:16:23.849 "data_size": 65536 00:16:23.849 }, 00:16:23.849 { 00:16:23.849 "name": "BaseBdev2", 00:16:23.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.849 "is_configured": false, 00:16:23.849 "data_offset": 0, 00:16:23.849 "data_size": 0 00:16:23.849 }, 00:16:23.849 { 00:16:23.849 "name": "BaseBdev3", 00:16:23.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.849 "is_configured": false, 00:16:23.849 "data_offset": 0, 00:16:23.849 "data_size": 0 00:16:23.849 }, 00:16:23.849 { 00:16:23.849 "name": "BaseBdev4", 00:16:23.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.849 "is_configured": false, 00:16:23.849 "data_offset": 0, 00:16:23.849 "data_size": 0 00:16:23.849 } 00:16:23.849 ] 00:16:23.849 }' 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.849 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.415 
[2024-11-15 09:35:12.584400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.415 [2024-11-15 09:35:12.584476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.415 [2024-11-15 09:35:12.596420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.415 [2024-11-15 09:35:12.598777] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.415 [2024-11-15 09:35:12.598830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.415 [2024-11-15 09:35:12.598842] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.415 [2024-11-15 09:35:12.598872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.415 [2024-11-15 09:35:12.598881] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:24.415 [2024-11-15 09:35:12.598890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.415 "name": "Existed_Raid", 00:16:24.415 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:24.415 "strip_size_kb": 64, 00:16:24.415 "state": "configuring", 00:16:24.415 "raid_level": "raid5f", 00:16:24.415 "superblock": false, 00:16:24.415 "num_base_bdevs": 4, 00:16:24.415 "num_base_bdevs_discovered": 1, 00:16:24.415 "num_base_bdevs_operational": 4, 00:16:24.415 "base_bdevs_list": [ 00:16:24.415 { 00:16:24.415 "name": "BaseBdev1", 00:16:24.415 "uuid": "1759ea22-9d51-411a-b575-912caa15d031", 00:16:24.415 "is_configured": true, 00:16:24.415 "data_offset": 0, 00:16:24.415 "data_size": 65536 00:16:24.415 }, 00:16:24.415 { 00:16:24.415 "name": "BaseBdev2", 00:16:24.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.415 "is_configured": false, 00:16:24.415 "data_offset": 0, 00:16:24.415 "data_size": 0 00:16:24.415 }, 00:16:24.415 { 00:16:24.415 "name": "BaseBdev3", 00:16:24.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.415 "is_configured": false, 00:16:24.415 "data_offset": 0, 00:16:24.415 "data_size": 0 00:16:24.415 }, 00:16:24.415 { 00:16:24.415 "name": "BaseBdev4", 00:16:24.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.415 "is_configured": false, 00:16:24.415 "data_offset": 0, 00:16:24.415 "data_size": 0 00:16:24.415 } 00:16:24.415 ] 00:16:24.415 }' 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.415 09:35:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.673 [2024-11-15 09:35:13.103386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.673 BaseBdev2 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.673 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.673 [ 00:16:24.673 { 00:16:24.673 "name": "BaseBdev2", 00:16:24.673 "aliases": [ 00:16:24.673 "9e8d0e77-024a-4d66-889a-3e1d39de9a16" 00:16:24.673 ], 00:16:24.673 "product_name": "Malloc disk", 00:16:24.673 "block_size": 512, 00:16:24.673 "num_blocks": 65536, 00:16:24.673 "uuid": "9e8d0e77-024a-4d66-889a-3e1d39de9a16", 00:16:24.673 "assigned_rate_limits": { 00:16:24.673 "rw_ios_per_sec": 0, 00:16:24.673 "rw_mbytes_per_sec": 0, 00:16:24.673 
"r_mbytes_per_sec": 0, 00:16:24.673 "w_mbytes_per_sec": 0 00:16:24.673 }, 00:16:24.673 "claimed": true, 00:16:24.673 "claim_type": "exclusive_write", 00:16:24.673 "zoned": false, 00:16:24.673 "supported_io_types": { 00:16:24.673 "read": true, 00:16:24.673 "write": true, 00:16:24.673 "unmap": true, 00:16:24.673 "flush": true, 00:16:24.673 "reset": true, 00:16:24.673 "nvme_admin": false, 00:16:24.673 "nvme_io": false, 00:16:24.931 "nvme_io_md": false, 00:16:24.931 "write_zeroes": true, 00:16:24.931 "zcopy": true, 00:16:24.931 "get_zone_info": false, 00:16:24.931 "zone_management": false, 00:16:24.931 "zone_append": false, 00:16:24.931 "compare": false, 00:16:24.931 "compare_and_write": false, 00:16:24.931 "abort": true, 00:16:24.931 "seek_hole": false, 00:16:24.931 "seek_data": false, 00:16:24.931 "copy": true, 00:16:24.931 "nvme_iov_md": false 00:16:24.931 }, 00:16:24.931 "memory_domains": [ 00:16:24.931 { 00:16:24.931 "dma_device_id": "system", 00:16:24.931 "dma_device_type": 1 00:16:24.931 }, 00:16:24.931 { 00:16:24.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.931 "dma_device_type": 2 00:16:24.931 } 00:16:24.931 ], 00:16:24.931 "driver_specific": {} 00:16:24.931 } 00:16:24.931 ] 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.931 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.931 "name": "Existed_Raid", 00:16:24.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.931 "strip_size_kb": 64, 00:16:24.931 "state": "configuring", 00:16:24.931 "raid_level": "raid5f", 00:16:24.931 "superblock": false, 00:16:24.931 "num_base_bdevs": 4, 00:16:24.931 "num_base_bdevs_discovered": 2, 00:16:24.931 "num_base_bdevs_operational": 4, 00:16:24.931 "base_bdevs_list": [ 00:16:24.931 { 00:16:24.931 "name": "BaseBdev1", 00:16:24.931 "uuid": 
"1759ea22-9d51-411a-b575-912caa15d031", 00:16:24.931 "is_configured": true, 00:16:24.931 "data_offset": 0, 00:16:24.931 "data_size": 65536 00:16:24.931 }, 00:16:24.931 { 00:16:24.931 "name": "BaseBdev2", 00:16:24.931 "uuid": "9e8d0e77-024a-4d66-889a-3e1d39de9a16", 00:16:24.931 "is_configured": true, 00:16:24.931 "data_offset": 0, 00:16:24.931 "data_size": 65536 00:16:24.931 }, 00:16:24.931 { 00:16:24.931 "name": "BaseBdev3", 00:16:24.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.931 "is_configured": false, 00:16:24.931 "data_offset": 0, 00:16:24.931 "data_size": 0 00:16:24.931 }, 00:16:24.931 { 00:16:24.931 "name": "BaseBdev4", 00:16:24.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.931 "is_configured": false, 00:16:24.931 "data_offset": 0, 00:16:24.931 "data_size": 0 00:16:24.931 } 00:16:24.931 ] 00:16:24.932 }' 00:16:24.932 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.932 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.189 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.189 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.190 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.448 [2024-11-15 09:35:13.687905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.448 BaseBdev3 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.448 [ 00:16:25.448 { 00:16:25.448 "name": "BaseBdev3", 00:16:25.448 "aliases": [ 00:16:25.448 "8fedda3e-5e4f-477e-a55c-a2246868b39d" 00:16:25.448 ], 00:16:25.448 "product_name": "Malloc disk", 00:16:25.448 "block_size": 512, 00:16:25.448 "num_blocks": 65536, 00:16:25.448 "uuid": "8fedda3e-5e4f-477e-a55c-a2246868b39d", 00:16:25.448 "assigned_rate_limits": { 00:16:25.448 "rw_ios_per_sec": 0, 00:16:25.448 "rw_mbytes_per_sec": 0, 00:16:25.448 "r_mbytes_per_sec": 0, 00:16:25.448 "w_mbytes_per_sec": 0 00:16:25.448 }, 00:16:25.448 "claimed": true, 00:16:25.448 "claim_type": "exclusive_write", 00:16:25.448 "zoned": false, 00:16:25.448 "supported_io_types": { 00:16:25.448 "read": true, 00:16:25.448 "write": true, 00:16:25.448 "unmap": true, 00:16:25.448 "flush": true, 00:16:25.448 "reset": true, 00:16:25.448 "nvme_admin": false, 
00:16:25.448 "nvme_io": false, 00:16:25.448 "nvme_io_md": false, 00:16:25.448 "write_zeroes": true, 00:16:25.448 "zcopy": true, 00:16:25.448 "get_zone_info": false, 00:16:25.448 "zone_management": false, 00:16:25.448 "zone_append": false, 00:16:25.448 "compare": false, 00:16:25.448 "compare_and_write": false, 00:16:25.448 "abort": true, 00:16:25.448 "seek_hole": false, 00:16:25.448 "seek_data": false, 00:16:25.448 "copy": true, 00:16:25.448 "nvme_iov_md": false 00:16:25.448 }, 00:16:25.448 "memory_domains": [ 00:16:25.448 { 00:16:25.448 "dma_device_id": "system", 00:16:25.448 "dma_device_type": 1 00:16:25.448 }, 00:16:25.448 { 00:16:25.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.448 "dma_device_type": 2 00:16:25.448 } 00:16:25.448 ], 00:16:25.448 "driver_specific": {} 00:16:25.448 } 00:16:25.448 ] 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.448 "name": "Existed_Raid", 00:16:25.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.448 "strip_size_kb": 64, 00:16:25.448 "state": "configuring", 00:16:25.448 "raid_level": "raid5f", 00:16:25.448 "superblock": false, 00:16:25.448 "num_base_bdevs": 4, 00:16:25.448 "num_base_bdevs_discovered": 3, 00:16:25.448 "num_base_bdevs_operational": 4, 00:16:25.448 "base_bdevs_list": [ 00:16:25.448 { 00:16:25.448 "name": "BaseBdev1", 00:16:25.448 "uuid": "1759ea22-9d51-411a-b575-912caa15d031", 00:16:25.448 "is_configured": true, 00:16:25.448 "data_offset": 0, 00:16:25.448 "data_size": 65536 00:16:25.448 }, 00:16:25.448 { 00:16:25.448 "name": "BaseBdev2", 00:16:25.448 "uuid": "9e8d0e77-024a-4d66-889a-3e1d39de9a16", 00:16:25.448 "is_configured": true, 00:16:25.448 "data_offset": 0, 00:16:25.448 "data_size": 65536 00:16:25.448 }, 00:16:25.448 { 
00:16:25.448 "name": "BaseBdev3", 00:16:25.448 "uuid": "8fedda3e-5e4f-477e-a55c-a2246868b39d", 00:16:25.448 "is_configured": true, 00:16:25.448 "data_offset": 0, 00:16:25.448 "data_size": 65536 00:16:25.448 }, 00:16:25.448 { 00:16:25.448 "name": "BaseBdev4", 00:16:25.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.448 "is_configured": false, 00:16:25.448 "data_offset": 0, 00:16:25.448 "data_size": 0 00:16:25.448 } 00:16:25.448 ] 00:16:25.448 }' 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.448 09:35:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.013 [2024-11-15 09:35:14.236464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.013 [2024-11-15 09:35:14.236573] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.013 [2024-11-15 09:35:14.236585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:26.013 [2024-11-15 09:35:14.236909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:26.013 [2024-11-15 09:35:14.244724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:26.013 [2024-11-15 09:35:14.244755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:26.013 [2024-11-15 09:35:14.245102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.013 BaseBdev4 00:16:26.013 09:35:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.013 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.013 [ 00:16:26.013 { 00:16:26.013 "name": "BaseBdev4", 00:16:26.013 "aliases": [ 00:16:26.013 "dd7703e9-4bb9-4cfd-9f3a-8663035a3e18" 00:16:26.013 ], 00:16:26.013 "product_name": "Malloc disk", 00:16:26.013 "block_size": 512, 00:16:26.014 "num_blocks": 65536, 00:16:26.014 "uuid": "dd7703e9-4bb9-4cfd-9f3a-8663035a3e18", 00:16:26.014 "assigned_rate_limits": { 00:16:26.014 "rw_ios_per_sec": 0, 00:16:26.014 
"rw_mbytes_per_sec": 0, 00:16:26.014 "r_mbytes_per_sec": 0, 00:16:26.014 "w_mbytes_per_sec": 0 00:16:26.014 }, 00:16:26.014 "claimed": true, 00:16:26.014 "claim_type": "exclusive_write", 00:16:26.014 "zoned": false, 00:16:26.014 "supported_io_types": { 00:16:26.014 "read": true, 00:16:26.014 "write": true, 00:16:26.014 "unmap": true, 00:16:26.014 "flush": true, 00:16:26.014 "reset": true, 00:16:26.014 "nvme_admin": false, 00:16:26.014 "nvme_io": false, 00:16:26.014 "nvme_io_md": false, 00:16:26.014 "write_zeroes": true, 00:16:26.014 "zcopy": true, 00:16:26.014 "get_zone_info": false, 00:16:26.014 "zone_management": false, 00:16:26.014 "zone_append": false, 00:16:26.014 "compare": false, 00:16:26.014 "compare_and_write": false, 00:16:26.014 "abort": true, 00:16:26.014 "seek_hole": false, 00:16:26.014 "seek_data": false, 00:16:26.014 "copy": true, 00:16:26.014 "nvme_iov_md": false 00:16:26.014 }, 00:16:26.014 "memory_domains": [ 00:16:26.014 { 00:16:26.014 "dma_device_id": "system", 00:16:26.014 "dma_device_type": 1 00:16:26.014 }, 00:16:26.014 { 00:16:26.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.014 "dma_device_type": 2 00:16:26.014 } 00:16:26.014 ], 00:16:26.014 "driver_specific": {} 00:16:26.014 } 00:16:26.014 ] 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.014 09:35:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.014 "name": "Existed_Raid", 00:16:26.014 "uuid": "64cf95f8-a501-4d0d-ab9d-0404b1c28b05", 00:16:26.014 "strip_size_kb": 64, 00:16:26.014 "state": "online", 00:16:26.014 "raid_level": "raid5f", 00:16:26.014 "superblock": false, 00:16:26.014 "num_base_bdevs": 4, 00:16:26.014 "num_base_bdevs_discovered": 4, 00:16:26.014 "num_base_bdevs_operational": 4, 00:16:26.014 "base_bdevs_list": [ 00:16:26.014 { 00:16:26.014 "name": 
"BaseBdev1", 00:16:26.014 "uuid": "1759ea22-9d51-411a-b575-912caa15d031", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 }, 00:16:26.014 { 00:16:26.014 "name": "BaseBdev2", 00:16:26.014 "uuid": "9e8d0e77-024a-4d66-889a-3e1d39de9a16", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 }, 00:16:26.014 { 00:16:26.014 "name": "BaseBdev3", 00:16:26.014 "uuid": "8fedda3e-5e4f-477e-a55c-a2246868b39d", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 }, 00:16:26.014 { 00:16:26.014 "name": "BaseBdev4", 00:16:26.014 "uuid": "dd7703e9-4bb9-4cfd-9f3a-8663035a3e18", 00:16:26.014 "is_configured": true, 00:16:26.014 "data_offset": 0, 00:16:26.014 "data_size": 65536 00:16:26.014 } 00:16:26.014 ] 00:16:26.014 }' 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.014 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.272 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.531 [2024-11-15 09:35:14.742219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.531 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.531 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.531 "name": "Existed_Raid", 00:16:26.531 "aliases": [ 00:16:26.531 "64cf95f8-a501-4d0d-ab9d-0404b1c28b05" 00:16:26.531 ], 00:16:26.531 "product_name": "Raid Volume", 00:16:26.531 "block_size": 512, 00:16:26.531 "num_blocks": 196608, 00:16:26.531 "uuid": "64cf95f8-a501-4d0d-ab9d-0404b1c28b05", 00:16:26.531 "assigned_rate_limits": { 00:16:26.531 "rw_ios_per_sec": 0, 00:16:26.531 "rw_mbytes_per_sec": 0, 00:16:26.531 "r_mbytes_per_sec": 0, 00:16:26.531 "w_mbytes_per_sec": 0 00:16:26.531 }, 00:16:26.531 "claimed": false, 00:16:26.531 "zoned": false, 00:16:26.531 "supported_io_types": { 00:16:26.531 "read": true, 00:16:26.531 "write": true, 00:16:26.531 "unmap": false, 00:16:26.531 "flush": false, 00:16:26.531 "reset": true, 00:16:26.531 "nvme_admin": false, 00:16:26.531 "nvme_io": false, 00:16:26.531 "nvme_io_md": false, 00:16:26.531 "write_zeroes": true, 00:16:26.531 "zcopy": false, 00:16:26.531 "get_zone_info": false, 00:16:26.531 "zone_management": false, 00:16:26.531 "zone_append": false, 00:16:26.531 "compare": false, 00:16:26.531 "compare_and_write": false, 00:16:26.531 "abort": false, 00:16:26.531 "seek_hole": false, 00:16:26.531 "seek_data": false, 00:16:26.531 "copy": false, 00:16:26.531 "nvme_iov_md": false 00:16:26.531 }, 00:16:26.531 "driver_specific": { 00:16:26.531 "raid": { 00:16:26.531 "uuid": "64cf95f8-a501-4d0d-ab9d-0404b1c28b05", 00:16:26.531 "strip_size_kb": 64, 
00:16:26.531 "state": "online", 00:16:26.531 "raid_level": "raid5f", 00:16:26.531 "superblock": false, 00:16:26.531 "num_base_bdevs": 4, 00:16:26.531 "num_base_bdevs_discovered": 4, 00:16:26.531 "num_base_bdevs_operational": 4, 00:16:26.531 "base_bdevs_list": [ 00:16:26.531 { 00:16:26.531 "name": "BaseBdev1", 00:16:26.531 "uuid": "1759ea22-9d51-411a-b575-912caa15d031", 00:16:26.531 "is_configured": true, 00:16:26.531 "data_offset": 0, 00:16:26.531 "data_size": 65536 00:16:26.531 }, 00:16:26.531 { 00:16:26.531 "name": "BaseBdev2", 00:16:26.531 "uuid": "9e8d0e77-024a-4d66-889a-3e1d39de9a16", 00:16:26.532 "is_configured": true, 00:16:26.532 "data_offset": 0, 00:16:26.532 "data_size": 65536 00:16:26.532 }, 00:16:26.532 { 00:16:26.532 "name": "BaseBdev3", 00:16:26.532 "uuid": "8fedda3e-5e4f-477e-a55c-a2246868b39d", 00:16:26.532 "is_configured": true, 00:16:26.532 "data_offset": 0, 00:16:26.532 "data_size": 65536 00:16:26.532 }, 00:16:26.532 { 00:16:26.532 "name": "BaseBdev4", 00:16:26.532 "uuid": "dd7703e9-4bb9-4cfd-9f3a-8663035a3e18", 00:16:26.532 "is_configured": true, 00:16:26.532 "data_offset": 0, 00:16:26.532 "data_size": 65536 00:16:26.532 } 00:16:26.532 ] 00:16:26.532 } 00:16:26.532 } 00:16:26.532 }' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:26.532 BaseBdev2 00:16:26.532 BaseBdev3 00:16:26.532 BaseBdev4' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.532 09:35:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.532 09:35:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.790 09:35:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:26.790 [2024-11-15 09:35:15.069510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.790 09:35:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.790 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.790 "name": "Existed_Raid", 00:16:26.791 "uuid": "64cf95f8-a501-4d0d-ab9d-0404b1c28b05", 00:16:26.791 "strip_size_kb": 64, 00:16:26.791 "state": "online", 00:16:26.791 "raid_level": "raid5f", 00:16:26.791 "superblock": false, 00:16:26.791 "num_base_bdevs": 4, 00:16:26.791 "num_base_bdevs_discovered": 3, 00:16:26.791 "num_base_bdevs_operational": 3, 00:16:26.791 "base_bdevs_list": [ 00:16:26.791 { 00:16:26.791 "name": null, 00:16:26.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.791 "is_configured": false, 00:16:26.791 "data_offset": 0, 00:16:26.791 "data_size": 65536 00:16:26.791 }, 00:16:26.791 { 00:16:26.791 "name": "BaseBdev2", 00:16:26.791 "uuid": "9e8d0e77-024a-4d66-889a-3e1d39de9a16", 00:16:26.791 "is_configured": true, 00:16:26.791 "data_offset": 0, 00:16:26.791 "data_size": 65536 00:16:26.791 }, 00:16:26.791 { 00:16:26.791 "name": "BaseBdev3", 00:16:26.791 "uuid": "8fedda3e-5e4f-477e-a55c-a2246868b39d", 00:16:26.791 "is_configured": true, 00:16:26.791 "data_offset": 0, 00:16:26.791 "data_size": 65536 00:16:26.791 }, 00:16:26.791 { 00:16:26.791 "name": "BaseBdev4", 00:16:26.791 "uuid": "dd7703e9-4bb9-4cfd-9f3a-8663035a3e18", 00:16:26.791 "is_configured": true, 00:16:26.791 "data_offset": 0, 00:16:26.791 "data_size": 65536 00:16:26.791 } 00:16:26.791 ] 00:16:26.791 }' 00:16:26.791 
09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.791 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.359 [2024-11-15 09:35:15.604185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.359 [2024-11-15 09:35:15.604318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.359 [2024-11-15 09:35:15.712145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.359 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.359 [2024-11-15 09:35:15.760124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.619 09:35:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.619 [2024-11-15 09:35:15.926563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:27.619 [2024-11-15 09:35:15.926648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:27.619 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.619 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.619 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.619 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.619 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:27.619 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.619 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:27.620 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.620 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:27.620 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:27.620 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:27.620 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:27.620 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.620 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.889 BaseBdev2 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.889 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.889 [ 00:16:27.889 { 00:16:27.889 "name": "BaseBdev2", 00:16:27.889 "aliases": [ 00:16:27.889 "b00b0d57-e6bf-47ee-8e1a-98472df82de2" 00:16:27.889 ], 00:16:27.889 "product_name": "Malloc disk", 00:16:27.889 "block_size": 512, 00:16:27.889 "num_blocks": 65536, 00:16:27.889 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:27.889 "assigned_rate_limits": { 00:16:27.889 "rw_ios_per_sec": 0, 00:16:27.889 "rw_mbytes_per_sec": 0, 00:16:27.889 "r_mbytes_per_sec": 0, 00:16:27.889 "w_mbytes_per_sec": 0 00:16:27.889 }, 00:16:27.889 "claimed": false, 00:16:27.889 "zoned": false, 00:16:27.889 "supported_io_types": { 00:16:27.889 "read": true, 00:16:27.889 "write": true, 00:16:27.890 "unmap": true, 00:16:27.890 "flush": true, 00:16:27.890 "reset": true, 00:16:27.890 "nvme_admin": false, 00:16:27.890 "nvme_io": false, 00:16:27.890 "nvme_io_md": false, 00:16:27.890 "write_zeroes": true, 00:16:27.890 "zcopy": true, 00:16:27.890 "get_zone_info": false, 00:16:27.890 "zone_management": false, 00:16:27.890 "zone_append": false, 00:16:27.890 "compare": false, 00:16:27.890 "compare_and_write": false, 00:16:27.890 "abort": true, 00:16:27.890 "seek_hole": false, 00:16:27.890 "seek_data": false, 00:16:27.890 "copy": true, 00:16:27.890 "nvme_iov_md": false 00:16:27.890 }, 00:16:27.890 "memory_domains": [ 00:16:27.890 { 00:16:27.890 "dma_device_id": "system", 00:16:27.890 
"dma_device_type": 1 00:16:27.890 }, 00:16:27.890 { 00:16:27.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.890 "dma_device_type": 2 00:16:27.890 } 00:16:27.890 ], 00:16:27.890 "driver_specific": {} 00:16:27.890 } 00:16:27.890 ] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.890 BaseBdev3 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:27.890 09:35:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.890 [ 00:16:27.890 { 00:16:27.890 "name": "BaseBdev3", 00:16:27.890 "aliases": [ 00:16:27.890 "02e2ddc8-3a1b-4d20-b422-354d856497ff" 00:16:27.890 ], 00:16:27.890 "product_name": "Malloc disk", 00:16:27.890 "block_size": 512, 00:16:27.890 "num_blocks": 65536, 00:16:27.890 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:27.890 "assigned_rate_limits": { 00:16:27.890 "rw_ios_per_sec": 0, 00:16:27.890 "rw_mbytes_per_sec": 0, 00:16:27.890 "r_mbytes_per_sec": 0, 00:16:27.890 "w_mbytes_per_sec": 0 00:16:27.890 }, 00:16:27.890 "claimed": false, 00:16:27.890 "zoned": false, 00:16:27.890 "supported_io_types": { 00:16:27.890 "read": true, 00:16:27.890 "write": true, 00:16:27.890 "unmap": true, 00:16:27.890 "flush": true, 00:16:27.890 "reset": true, 00:16:27.890 "nvme_admin": false, 00:16:27.890 "nvme_io": false, 00:16:27.890 "nvme_io_md": false, 00:16:27.890 "write_zeroes": true, 00:16:27.890 "zcopy": true, 00:16:27.890 "get_zone_info": false, 00:16:27.890 "zone_management": false, 00:16:27.890 "zone_append": false, 00:16:27.890 "compare": false, 00:16:27.890 "compare_and_write": false, 00:16:27.890 "abort": true, 00:16:27.890 "seek_hole": false, 00:16:27.890 "seek_data": false, 00:16:27.890 "copy": true, 00:16:27.890 "nvme_iov_md": false 00:16:27.890 }, 00:16:27.890 "memory_domains": [ 00:16:27.890 { 00:16:27.890 
"dma_device_id": "system", 00:16:27.890 "dma_device_type": 1 00:16:27.890 }, 00:16:27.890 { 00:16:27.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.890 "dma_device_type": 2 00:16:27.890 } 00:16:27.890 ], 00:16:27.890 "driver_specific": {} 00:16:27.890 } 00:16:27.890 ] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.890 BaseBdev4 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.890 [ 00:16:27.890 { 00:16:27.890 "name": "BaseBdev4", 00:16:27.890 "aliases": [ 00:16:27.890 "37cb5f8d-bbe7-4360-8660-796455f74892" 00:16:27.890 ], 00:16:27.890 "product_name": "Malloc disk", 00:16:27.890 "block_size": 512, 00:16:27.890 "num_blocks": 65536, 00:16:27.890 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:27.890 "assigned_rate_limits": { 00:16:27.890 "rw_ios_per_sec": 0, 00:16:27.890 "rw_mbytes_per_sec": 0, 00:16:27.890 "r_mbytes_per_sec": 0, 00:16:27.890 "w_mbytes_per_sec": 0 00:16:27.890 }, 00:16:27.890 "claimed": false, 00:16:27.890 "zoned": false, 00:16:27.890 "supported_io_types": { 00:16:27.890 "read": true, 00:16:27.890 "write": true, 00:16:27.890 "unmap": true, 00:16:27.890 "flush": true, 00:16:27.890 "reset": true, 00:16:27.890 "nvme_admin": false, 00:16:27.890 "nvme_io": false, 00:16:27.890 "nvme_io_md": false, 00:16:27.890 "write_zeroes": true, 00:16:27.890 "zcopy": true, 00:16:27.890 "get_zone_info": false, 00:16:27.890 "zone_management": false, 00:16:27.890 "zone_append": false, 00:16:27.890 "compare": false, 00:16:27.890 "compare_and_write": false, 00:16:27.890 "abort": true, 00:16:27.890 "seek_hole": false, 00:16:27.890 "seek_data": false, 00:16:27.890 "copy": true, 00:16:27.890 "nvme_iov_md": false 00:16:27.890 }, 00:16:27.890 "memory_domains": [ 
00:16:27.890 { 00:16:27.890 "dma_device_id": "system", 00:16:27.890 "dma_device_type": 1 00:16:27.890 }, 00:16:27.890 { 00:16:27.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.890 "dma_device_type": 2 00:16:27.890 } 00:16:27.890 ], 00:16:27.890 "driver_specific": {} 00:16:27.890 } 00:16:27.890 ] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.890 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.151 [2024-11-15 09:35:16.352454] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.151 [2024-11-15 09:35:16.352559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.151 [2024-11-15 09:35:16.352605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.151 [2024-11-15 09:35:16.354749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.151 [2024-11-15 09:35:16.354842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.151 "name": "Existed_Raid", 00:16:28.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.151 "strip_size_kb": 64, 00:16:28.151 "state": "configuring", 00:16:28.151 "raid_level": "raid5f", 00:16:28.151 
"superblock": false, 00:16:28.151 "num_base_bdevs": 4, 00:16:28.151 "num_base_bdevs_discovered": 3, 00:16:28.151 "num_base_bdevs_operational": 4, 00:16:28.151 "base_bdevs_list": [ 00:16:28.151 { 00:16:28.151 "name": "BaseBdev1", 00:16:28.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.151 "is_configured": false, 00:16:28.151 "data_offset": 0, 00:16:28.151 "data_size": 0 00:16:28.151 }, 00:16:28.151 { 00:16:28.151 "name": "BaseBdev2", 00:16:28.151 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:28.151 "is_configured": true, 00:16:28.151 "data_offset": 0, 00:16:28.151 "data_size": 65536 00:16:28.151 }, 00:16:28.151 { 00:16:28.151 "name": "BaseBdev3", 00:16:28.151 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:28.151 "is_configured": true, 00:16:28.151 "data_offset": 0, 00:16:28.151 "data_size": 65536 00:16:28.151 }, 00:16:28.151 { 00:16:28.151 "name": "BaseBdev4", 00:16:28.151 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:28.151 "is_configured": true, 00:16:28.151 "data_offset": 0, 00:16:28.151 "data_size": 65536 00:16:28.151 } 00:16:28.151 ] 00:16:28.151 }' 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.151 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.411 [2024-11-15 09:35:16.847685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.411 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.672 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.672 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.672 "name": "Existed_Raid", 00:16:28.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.672 "strip_size_kb": 64, 00:16:28.672 "state": "configuring", 00:16:28.672 "raid_level": "raid5f", 00:16:28.672 "superblock": false, 
00:16:28.672 "num_base_bdevs": 4, 00:16:28.672 "num_base_bdevs_discovered": 2, 00:16:28.672 "num_base_bdevs_operational": 4, 00:16:28.672 "base_bdevs_list": [ 00:16:28.672 { 00:16:28.672 "name": "BaseBdev1", 00:16:28.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.672 "is_configured": false, 00:16:28.672 "data_offset": 0, 00:16:28.672 "data_size": 0 00:16:28.672 }, 00:16:28.672 { 00:16:28.672 "name": null, 00:16:28.672 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:28.672 "is_configured": false, 00:16:28.672 "data_offset": 0, 00:16:28.672 "data_size": 65536 00:16:28.672 }, 00:16:28.672 { 00:16:28.672 "name": "BaseBdev3", 00:16:28.672 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:28.672 "is_configured": true, 00:16:28.672 "data_offset": 0, 00:16:28.672 "data_size": 65536 00:16:28.672 }, 00:16:28.672 { 00:16:28.672 "name": "BaseBdev4", 00:16:28.672 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:28.672 "is_configured": true, 00:16:28.672 "data_offset": 0, 00:16:28.672 "data_size": 65536 00:16:28.672 } 00:16:28.672 ] 00:16:28.672 }' 00:16:28.672 09:35:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.672 09:35:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:28.932 
09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.932 [2024-11-15 09:35:17.381663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:28.932 BaseBdev1 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:28.932 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.932 
09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.192 [ 00:16:29.192 { 00:16:29.192 "name": "BaseBdev1", 00:16:29.192 "aliases": [ 00:16:29.192 "90bc7b9c-c306-4ef9-9cdd-63013474f0df" 00:16:29.192 ], 00:16:29.192 "product_name": "Malloc disk", 00:16:29.192 "block_size": 512, 00:16:29.192 "num_blocks": 65536, 00:16:29.192 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:29.192 "assigned_rate_limits": { 00:16:29.192 "rw_ios_per_sec": 0, 00:16:29.192 "rw_mbytes_per_sec": 0, 00:16:29.192 "r_mbytes_per_sec": 0, 00:16:29.192 "w_mbytes_per_sec": 0 00:16:29.192 }, 00:16:29.192 "claimed": true, 00:16:29.192 "claim_type": "exclusive_write", 00:16:29.192 "zoned": false, 00:16:29.192 "supported_io_types": { 00:16:29.192 "read": true, 00:16:29.192 "write": true, 00:16:29.192 "unmap": true, 00:16:29.192 "flush": true, 00:16:29.192 "reset": true, 00:16:29.192 "nvme_admin": false, 00:16:29.192 "nvme_io": false, 00:16:29.192 "nvme_io_md": false, 00:16:29.192 "write_zeroes": true, 00:16:29.192 "zcopy": true, 00:16:29.192 "get_zone_info": false, 00:16:29.192 "zone_management": false, 00:16:29.192 "zone_append": false, 00:16:29.192 "compare": false, 00:16:29.192 "compare_and_write": false, 00:16:29.192 "abort": true, 00:16:29.192 "seek_hole": false, 00:16:29.192 "seek_data": false, 00:16:29.192 "copy": true, 00:16:29.192 "nvme_iov_md": false 00:16:29.192 }, 00:16:29.192 "memory_domains": [ 00:16:29.192 { 00:16:29.192 "dma_device_id": "system", 00:16:29.192 "dma_device_type": 1 00:16:29.192 }, 00:16:29.192 { 00:16:29.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.192 "dma_device_type": 2 00:16:29.192 } 00:16:29.192 ], 00:16:29.192 "driver_specific": {} 00:16:29.192 } 00:16:29.192 ] 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:29.192 09:35:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.192 "name": "Existed_Raid", 00:16:29.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.192 "strip_size_kb": 64, 00:16:29.192 "state": 
"configuring", 00:16:29.192 "raid_level": "raid5f", 00:16:29.192 "superblock": false, 00:16:29.192 "num_base_bdevs": 4, 00:16:29.192 "num_base_bdevs_discovered": 3, 00:16:29.192 "num_base_bdevs_operational": 4, 00:16:29.192 "base_bdevs_list": [ 00:16:29.192 { 00:16:29.192 "name": "BaseBdev1", 00:16:29.192 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:29.192 "is_configured": true, 00:16:29.192 "data_offset": 0, 00:16:29.192 "data_size": 65536 00:16:29.192 }, 00:16:29.192 { 00:16:29.192 "name": null, 00:16:29.192 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:29.192 "is_configured": false, 00:16:29.192 "data_offset": 0, 00:16:29.192 "data_size": 65536 00:16:29.192 }, 00:16:29.192 { 00:16:29.192 "name": "BaseBdev3", 00:16:29.192 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:29.192 "is_configured": true, 00:16:29.192 "data_offset": 0, 00:16:29.192 "data_size": 65536 00:16:29.192 }, 00:16:29.192 { 00:16:29.192 "name": "BaseBdev4", 00:16:29.192 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:29.192 "is_configured": true, 00:16:29.192 "data_offset": 0, 00:16:29.192 "data_size": 65536 00:16:29.192 } 00:16:29.192 ] 00:16:29.192 }' 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.192 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.452 09:35:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.452 [2024-11-15 09:35:17.896915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.452 09:35:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.452 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.712 09:35:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.712 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.712 "name": "Existed_Raid", 00:16:29.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.712 "strip_size_kb": 64, 00:16:29.712 "state": "configuring", 00:16:29.712 "raid_level": "raid5f", 00:16:29.712 "superblock": false, 00:16:29.712 "num_base_bdevs": 4, 00:16:29.712 "num_base_bdevs_discovered": 2, 00:16:29.712 "num_base_bdevs_operational": 4, 00:16:29.712 "base_bdevs_list": [ 00:16:29.712 { 00:16:29.712 "name": "BaseBdev1", 00:16:29.712 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:29.712 "is_configured": true, 00:16:29.712 "data_offset": 0, 00:16:29.712 "data_size": 65536 00:16:29.712 }, 00:16:29.712 { 00:16:29.712 "name": null, 00:16:29.712 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:29.712 "is_configured": false, 00:16:29.712 "data_offset": 0, 00:16:29.712 "data_size": 65536 00:16:29.712 }, 00:16:29.712 { 00:16:29.712 "name": null, 00:16:29.712 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:29.712 "is_configured": false, 00:16:29.712 "data_offset": 0, 00:16:29.712 "data_size": 65536 00:16:29.712 }, 00:16:29.712 { 00:16:29.712 "name": "BaseBdev4", 00:16:29.712 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:29.712 "is_configured": true, 00:16:29.712 "data_offset": 0, 00:16:29.712 "data_size": 65536 00:16:29.712 } 00:16:29.712 ] 00:16:29.712 }' 00:16:29.712 09:35:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.712 09:35:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.972 [2024-11-15 09:35:18.428009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.972 
09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.972 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.232 "name": "Existed_Raid", 00:16:30.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.232 "strip_size_kb": 64, 00:16:30.232 "state": "configuring", 00:16:30.232 "raid_level": "raid5f", 00:16:30.232 "superblock": false, 00:16:30.232 "num_base_bdevs": 4, 00:16:30.232 "num_base_bdevs_discovered": 3, 00:16:30.232 "num_base_bdevs_operational": 4, 00:16:30.232 "base_bdevs_list": [ 00:16:30.232 { 00:16:30.232 "name": "BaseBdev1", 00:16:30.232 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:30.232 "is_configured": true, 00:16:30.232 "data_offset": 0, 00:16:30.232 "data_size": 65536 00:16:30.232 }, 00:16:30.232 { 00:16:30.232 "name": null, 00:16:30.232 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:30.232 "is_configured": 
false, 00:16:30.232 "data_offset": 0, 00:16:30.232 "data_size": 65536 00:16:30.232 }, 00:16:30.232 { 00:16:30.232 "name": "BaseBdev3", 00:16:30.232 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:30.232 "is_configured": true, 00:16:30.232 "data_offset": 0, 00:16:30.232 "data_size": 65536 00:16:30.232 }, 00:16:30.232 { 00:16:30.232 "name": "BaseBdev4", 00:16:30.232 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:30.232 "is_configured": true, 00:16:30.232 "data_offset": 0, 00:16:30.232 "data_size": 65536 00:16:30.232 } 00:16:30.232 ] 00:16:30.232 }' 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.232 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.491 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.492 09:35:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.492 [2024-11-15 09:35:18.887275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.751 09:35:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.751 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:30.751 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.751 09:35:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.751 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.752 "name": "Existed_Raid", 00:16:30.752 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:30.752 "strip_size_kb": 64, 00:16:30.752 "state": "configuring", 00:16:30.752 "raid_level": "raid5f", 00:16:30.752 "superblock": false, 00:16:30.752 "num_base_bdevs": 4, 00:16:30.752 "num_base_bdevs_discovered": 2, 00:16:30.752 "num_base_bdevs_operational": 4, 00:16:30.752 "base_bdevs_list": [ 00:16:30.752 { 00:16:30.752 "name": null, 00:16:30.752 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:30.752 "is_configured": false, 00:16:30.752 "data_offset": 0, 00:16:30.752 "data_size": 65536 00:16:30.752 }, 00:16:30.752 { 00:16:30.752 "name": null, 00:16:30.752 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:30.752 "is_configured": false, 00:16:30.752 "data_offset": 0, 00:16:30.752 "data_size": 65536 00:16:30.752 }, 00:16:30.752 { 00:16:30.752 "name": "BaseBdev3", 00:16:30.752 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:30.752 "is_configured": true, 00:16:30.752 "data_offset": 0, 00:16:30.752 "data_size": 65536 00:16:30.752 }, 00:16:30.752 { 00:16:30.752 "name": "BaseBdev4", 00:16:30.752 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:30.752 "is_configured": true, 00:16:30.752 "data_offset": 0, 00:16:30.752 "data_size": 65536 00:16:30.752 } 00:16:30.752 ] 00:16:30.752 }' 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.752 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.362 [2024-11-15 09:35:19.540936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.362 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.363 "name": "Existed_Raid", 00:16:31.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.363 "strip_size_kb": 64, 00:16:31.363 "state": "configuring", 00:16:31.363 "raid_level": "raid5f", 00:16:31.363 "superblock": false, 00:16:31.363 "num_base_bdevs": 4, 00:16:31.363 "num_base_bdevs_discovered": 3, 00:16:31.363 "num_base_bdevs_operational": 4, 00:16:31.363 "base_bdevs_list": [ 00:16:31.363 { 00:16:31.363 "name": null, 00:16:31.363 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:31.363 "is_configured": false, 00:16:31.363 "data_offset": 0, 00:16:31.363 "data_size": 65536 00:16:31.363 }, 00:16:31.363 { 00:16:31.363 "name": "BaseBdev2", 00:16:31.363 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:31.363 "is_configured": true, 00:16:31.363 "data_offset": 0, 00:16:31.363 "data_size": 65536 00:16:31.363 }, 00:16:31.363 { 00:16:31.363 "name": "BaseBdev3", 00:16:31.363 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:31.363 "is_configured": true, 00:16:31.363 "data_offset": 0, 00:16:31.363 "data_size": 65536 00:16:31.363 }, 00:16:31.363 { 00:16:31.363 "name": "BaseBdev4", 00:16:31.363 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:31.363 "is_configured": true, 00:16:31.363 "data_offset": 0, 00:16:31.363 "data_size": 65536 00:16:31.363 } 00:16:31.363 ] 00:16:31.363 }' 00:16:31.363 09:35:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.363 09:35:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90bc7b9c-c306-4ef9-9cdd-63013474f0df 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.637 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.896 [2024-11-15 09:35:20.135513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:31.896 [2024-11-15 
09:35:20.135690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:31.896 [2024-11-15 09:35:20.135716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:31.896 [2024-11-15 09:35:20.136056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:31.896 [2024-11-15 09:35:20.143104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:31.896 [2024-11-15 09:35:20.143168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:31.896 [2024-11-15 09:35:20.143527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.896 NewBaseBdev 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.896 [ 00:16:31.896 { 00:16:31.896 "name": "NewBaseBdev", 00:16:31.896 "aliases": [ 00:16:31.896 "90bc7b9c-c306-4ef9-9cdd-63013474f0df" 00:16:31.896 ], 00:16:31.896 "product_name": "Malloc disk", 00:16:31.896 "block_size": 512, 00:16:31.896 "num_blocks": 65536, 00:16:31.896 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:31.896 "assigned_rate_limits": { 00:16:31.896 "rw_ios_per_sec": 0, 00:16:31.896 "rw_mbytes_per_sec": 0, 00:16:31.896 "r_mbytes_per_sec": 0, 00:16:31.896 "w_mbytes_per_sec": 0 00:16:31.896 }, 00:16:31.896 "claimed": true, 00:16:31.896 "claim_type": "exclusive_write", 00:16:31.896 "zoned": false, 00:16:31.896 "supported_io_types": { 00:16:31.896 "read": true, 00:16:31.896 "write": true, 00:16:31.896 "unmap": true, 00:16:31.896 "flush": true, 00:16:31.896 "reset": true, 00:16:31.896 "nvme_admin": false, 00:16:31.896 "nvme_io": false, 00:16:31.896 "nvme_io_md": false, 00:16:31.896 "write_zeroes": true, 00:16:31.896 "zcopy": true, 00:16:31.896 "get_zone_info": false, 00:16:31.896 "zone_management": false, 00:16:31.896 "zone_append": false, 00:16:31.896 "compare": false, 00:16:31.896 "compare_and_write": false, 00:16:31.896 "abort": true, 00:16:31.896 "seek_hole": false, 00:16:31.896 "seek_data": false, 00:16:31.896 "copy": true, 00:16:31.896 "nvme_iov_md": false 00:16:31.896 }, 00:16:31.896 "memory_domains": [ 00:16:31.896 { 00:16:31.896 "dma_device_id": "system", 00:16:31.896 "dma_device_type": 1 00:16:31.896 }, 00:16:31.896 { 00:16:31.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.896 "dma_device_type": 2 00:16:31.896 } 
00:16:31.896 ], 00:16:31.896 "driver_specific": {} 00:16:31.896 } 00:16:31.896 ] 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:31.896 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.897 "name": "Existed_Raid", 00:16:31.897 "uuid": "24c07568-c0da-4114-ad06-4b3d79321b41", 00:16:31.897 "strip_size_kb": 64, 00:16:31.897 "state": "online", 00:16:31.897 "raid_level": "raid5f", 00:16:31.897 "superblock": false, 00:16:31.897 "num_base_bdevs": 4, 00:16:31.897 "num_base_bdevs_discovered": 4, 00:16:31.897 "num_base_bdevs_operational": 4, 00:16:31.897 "base_bdevs_list": [ 00:16:31.897 { 00:16:31.897 "name": "NewBaseBdev", 00:16:31.897 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:31.897 "is_configured": true, 00:16:31.897 "data_offset": 0, 00:16:31.897 "data_size": 65536 00:16:31.897 }, 00:16:31.897 { 00:16:31.897 "name": "BaseBdev2", 00:16:31.897 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:31.897 "is_configured": true, 00:16:31.897 "data_offset": 0, 00:16:31.897 "data_size": 65536 00:16:31.897 }, 00:16:31.897 { 00:16:31.897 "name": "BaseBdev3", 00:16:31.897 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:31.897 "is_configured": true, 00:16:31.897 "data_offset": 0, 00:16:31.897 "data_size": 65536 00:16:31.897 }, 00:16:31.897 { 00:16:31.897 "name": "BaseBdev4", 00:16:31.897 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:31.897 "is_configured": true, 00:16:31.897 "data_offset": 0, 00:16:31.897 "data_size": 65536 00:16:31.897 } 00:16:31.897 ] 00:16:31.897 }' 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.897 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.466 [2024-11-15 09:35:20.668773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.466 "name": "Existed_Raid", 00:16:32.466 "aliases": [ 00:16:32.466 "24c07568-c0da-4114-ad06-4b3d79321b41" 00:16:32.466 ], 00:16:32.466 "product_name": "Raid Volume", 00:16:32.466 "block_size": 512, 00:16:32.466 "num_blocks": 196608, 00:16:32.466 "uuid": "24c07568-c0da-4114-ad06-4b3d79321b41", 00:16:32.466 "assigned_rate_limits": { 00:16:32.466 "rw_ios_per_sec": 0, 00:16:32.466 "rw_mbytes_per_sec": 0, 00:16:32.466 "r_mbytes_per_sec": 0, 00:16:32.466 "w_mbytes_per_sec": 0 00:16:32.466 }, 00:16:32.466 "claimed": false, 00:16:32.466 "zoned": false, 00:16:32.466 "supported_io_types": { 00:16:32.466 "read": true, 00:16:32.466 "write": true, 00:16:32.466 "unmap": false, 00:16:32.466 "flush": false, 00:16:32.466 "reset": true, 00:16:32.466 "nvme_admin": false, 00:16:32.466 "nvme_io": false, 00:16:32.466 "nvme_io_md": 
false, 00:16:32.466 "write_zeroes": true, 00:16:32.466 "zcopy": false, 00:16:32.466 "get_zone_info": false, 00:16:32.466 "zone_management": false, 00:16:32.466 "zone_append": false, 00:16:32.466 "compare": false, 00:16:32.466 "compare_and_write": false, 00:16:32.466 "abort": false, 00:16:32.466 "seek_hole": false, 00:16:32.466 "seek_data": false, 00:16:32.466 "copy": false, 00:16:32.466 "nvme_iov_md": false 00:16:32.466 }, 00:16:32.466 "driver_specific": { 00:16:32.466 "raid": { 00:16:32.466 "uuid": "24c07568-c0da-4114-ad06-4b3d79321b41", 00:16:32.466 "strip_size_kb": 64, 00:16:32.466 "state": "online", 00:16:32.466 "raid_level": "raid5f", 00:16:32.466 "superblock": false, 00:16:32.466 "num_base_bdevs": 4, 00:16:32.466 "num_base_bdevs_discovered": 4, 00:16:32.466 "num_base_bdevs_operational": 4, 00:16:32.466 "base_bdevs_list": [ 00:16:32.466 { 00:16:32.466 "name": "NewBaseBdev", 00:16:32.466 "uuid": "90bc7b9c-c306-4ef9-9cdd-63013474f0df", 00:16:32.466 "is_configured": true, 00:16:32.466 "data_offset": 0, 00:16:32.466 "data_size": 65536 00:16:32.466 }, 00:16:32.466 { 00:16:32.466 "name": "BaseBdev2", 00:16:32.466 "uuid": "b00b0d57-e6bf-47ee-8e1a-98472df82de2", 00:16:32.466 "is_configured": true, 00:16:32.466 "data_offset": 0, 00:16:32.466 "data_size": 65536 00:16:32.466 }, 00:16:32.466 { 00:16:32.466 "name": "BaseBdev3", 00:16:32.466 "uuid": "02e2ddc8-3a1b-4d20-b422-354d856497ff", 00:16:32.466 "is_configured": true, 00:16:32.466 "data_offset": 0, 00:16:32.466 "data_size": 65536 00:16:32.466 }, 00:16:32.466 { 00:16:32.466 "name": "BaseBdev4", 00:16:32.466 "uuid": "37cb5f8d-bbe7-4360-8660-796455f74892", 00:16:32.466 "is_configured": true, 00:16:32.466 "data_offset": 0, 00:16:32.466 "data_size": 65536 00:16:32.466 } 00:16:32.466 ] 00:16:32.466 } 00:16:32.466 } 00:16:32.466 }' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.466 09:35:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:32.466 BaseBdev2 00:16:32.466 BaseBdev3 00:16:32.466 BaseBdev4' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.466 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.726 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.726 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.726 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.726 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.726 09:35:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:32.726 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.726 09:35:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.726 09:35:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.726 [2024-11-15 09:35:21.011968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.726 [2024-11-15 09:35:21.012010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.726 [2024-11-15 09:35:21.012118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.726 [2024-11-15 09:35:21.012446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.726 [2024-11-15 09:35:21.012458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83194 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83194 ']' 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83194 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83194 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83194' 00:16:32.726 killing process with pid 83194 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83194 00:16:32.726 [2024-11-15 09:35:21.049025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.726 09:35:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83194 00:16:33.295 [2024-11-15 09:35:21.487016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.674 09:35:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.674 00:16:34.674 real 0m12.120s 00:16:34.674 user 0m18.904s 00:16:34.674 sys 0m2.364s 00:16:34.674 09:35:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:34.674 ************************************ 00:16:34.674 END TEST raid5f_state_function_test 00:16:34.674 ************************************ 00:16:34.674 09:35:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.674 09:35:22 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:34.674 09:35:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:34.675 09:35:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:34.675 09:35:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.675 ************************************ 00:16:34.675 START TEST 
raid5f_state_function_test_sb 00:16:34.675 ************************************ 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:34.675 
09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83866 00:16:34.675 Process raid pid: 83866 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83866' 00:16:34.675 09:35:22 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83866 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83866 ']' 00:16:34.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:34.675 09:35:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.675 [2024-11-15 09:35:22.908411] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:16:34.675 [2024-11-15 09:35:22.908548] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.675 [2024-11-15 09:35:23.088479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.934 [2024-11-15 09:35:23.233668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.193 [2024-11-15 09:35:23.480998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.193 [2024-11-15 09:35:23.481054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.453 [2024-11-15 09:35:23.757804] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.453 [2024-11-15 09:35:23.758001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.453 [2024-11-15 09:35:23.758018] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.453 [2024-11-15 09:35:23.758030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.453 [2024-11-15 09:35:23.758036] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:35.453 [2024-11-15 09:35:23.758046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.453 [2024-11-15 09:35:23.758053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:35.453 [2024-11-15 09:35:23.758063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.453 "name": "Existed_Raid", 00:16:35.453 "uuid": "49ace881-3d8f-4e8f-8a18-c819bbc15f52", 00:16:35.453 "strip_size_kb": 64, 00:16:35.453 "state": "configuring", 00:16:35.453 "raid_level": "raid5f", 00:16:35.453 "superblock": true, 00:16:35.453 "num_base_bdevs": 4, 00:16:35.453 "num_base_bdevs_discovered": 0, 00:16:35.453 "num_base_bdevs_operational": 4, 00:16:35.453 "base_bdevs_list": [ 00:16:35.453 { 00:16:35.453 "name": "BaseBdev1", 00:16:35.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.453 "is_configured": false, 00:16:35.453 "data_offset": 0, 00:16:35.453 "data_size": 0 00:16:35.453 }, 00:16:35.453 { 00:16:35.453 "name": "BaseBdev2", 00:16:35.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.453 "is_configured": false, 00:16:35.453 "data_offset": 0, 00:16:35.453 "data_size": 0 00:16:35.453 }, 00:16:35.453 { 00:16:35.453 "name": "BaseBdev3", 00:16:35.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.453 "is_configured": false, 00:16:35.453 "data_offset": 0, 00:16:35.453 "data_size": 0 00:16:35.453 }, 00:16:35.453 { 00:16:35.453 "name": "BaseBdev4", 00:16:35.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.453 "is_configured": false, 00:16:35.453 "data_offset": 0, 00:16:35.453 "data_size": 0 00:16:35.453 } 00:16:35.453 ] 00:16:35.453 }' 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.453 09:35:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 [2024-11-15 09:35:24.248864] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.022 [2024-11-15 09:35:24.248987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 [2024-11-15 09:35:24.260815] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.022 [2024-11-15 09:35:24.260932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.022 [2024-11-15 09:35:24.260962] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.022 [2024-11-15 09:35:24.260987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.022 [2024-11-15 09:35:24.261006] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.022 [2024-11-15 09:35:24.261028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.022 [2024-11-15 09:35:24.261046] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.022 [2024-11-15 09:35:24.261068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 [2024-11-15 09:35:24.316674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.022 BaseBdev1 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.022 [ 00:16:36.022 { 00:16:36.022 "name": "BaseBdev1", 00:16:36.022 "aliases": [ 00:16:36.022 "a354d3d3-cb07-4b39-b727-bf75d1876ec7" 00:16:36.022 ], 00:16:36.022 "product_name": "Malloc disk", 00:16:36.022 "block_size": 512, 00:16:36.022 "num_blocks": 65536, 00:16:36.022 "uuid": "a354d3d3-cb07-4b39-b727-bf75d1876ec7", 00:16:36.022 "assigned_rate_limits": { 00:16:36.022 "rw_ios_per_sec": 0, 00:16:36.022 "rw_mbytes_per_sec": 0, 00:16:36.022 "r_mbytes_per_sec": 0, 00:16:36.022 "w_mbytes_per_sec": 0 00:16:36.022 }, 00:16:36.022 "claimed": true, 00:16:36.022 "claim_type": "exclusive_write", 00:16:36.022 "zoned": false, 00:16:36.022 "supported_io_types": { 00:16:36.022 "read": true, 00:16:36.022 "write": true, 00:16:36.022 "unmap": true, 00:16:36.022 "flush": true, 00:16:36.022 "reset": true, 00:16:36.022 "nvme_admin": false, 00:16:36.022 "nvme_io": false, 00:16:36.022 "nvme_io_md": false, 00:16:36.022 "write_zeroes": true, 00:16:36.022 "zcopy": true, 00:16:36.022 "get_zone_info": false, 00:16:36.022 "zone_management": false, 00:16:36.022 "zone_append": false, 00:16:36.022 "compare": false, 00:16:36.022 "compare_and_write": false, 00:16:36.022 "abort": true, 00:16:36.022 "seek_hole": false, 00:16:36.022 "seek_data": false, 00:16:36.022 "copy": true, 00:16:36.022 "nvme_iov_md": false 00:16:36.022 }, 00:16:36.022 "memory_domains": [ 00:16:36.022 { 00:16:36.022 "dma_device_id": "system", 00:16:36.022 "dma_device_type": 1 00:16:36.022 }, 00:16:36.022 { 00:16:36.022 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:36.022 "dma_device_type": 2 00:16:36.022 } 00:16:36.022 ], 00:16:36.022 "driver_specific": {} 00:16:36.022 } 00:16:36.022 ] 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.022 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.023 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.023 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.023 09:35:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.023 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.023 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.023 "name": "Existed_Raid", 00:16:36.023 "uuid": "0b5080cf-92ae-40f4-b9b8-ae90e446b617", 00:16:36.023 "strip_size_kb": 64, 00:16:36.023 "state": "configuring", 00:16:36.023 "raid_level": "raid5f", 00:16:36.023 "superblock": true, 00:16:36.023 "num_base_bdevs": 4, 00:16:36.023 "num_base_bdevs_discovered": 1, 00:16:36.023 "num_base_bdevs_operational": 4, 00:16:36.023 "base_bdevs_list": [ 00:16:36.023 { 00:16:36.023 "name": "BaseBdev1", 00:16:36.023 "uuid": "a354d3d3-cb07-4b39-b727-bf75d1876ec7", 00:16:36.023 "is_configured": true, 00:16:36.023 "data_offset": 2048, 00:16:36.023 "data_size": 63488 00:16:36.023 }, 00:16:36.023 { 00:16:36.023 "name": "BaseBdev2", 00:16:36.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.023 "is_configured": false, 00:16:36.023 "data_offset": 0, 00:16:36.023 "data_size": 0 00:16:36.023 }, 00:16:36.023 { 00:16:36.023 "name": "BaseBdev3", 00:16:36.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.023 "is_configured": false, 00:16:36.023 "data_offset": 0, 00:16:36.023 "data_size": 0 00:16:36.023 }, 00:16:36.023 { 00:16:36.023 "name": "BaseBdev4", 00:16:36.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.023 "is_configured": false, 00:16:36.023 "data_offset": 0, 00:16:36.023 "data_size": 0 00:16:36.023 } 00:16:36.023 ] 00:16:36.023 }' 00:16:36.023 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.023 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.590 09:35:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.590 [2024-11-15 09:35:24.839868] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.590 [2024-11-15 09:35:24.840017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.590 [2024-11-15 09:35:24.847937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.590 [2024-11-15 09:35:24.850260] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.590 [2024-11-15 09:35:24.850351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.590 [2024-11-15 09:35:24.850400] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.590 [2024-11-15 09:35:24.850426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.590 [2024-11-15 09:35:24.850446] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.590 [2024-11-15 09:35:24.850467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.590 09:35:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.590 "name": "Existed_Raid", 00:16:36.590 "uuid": "71cd159b-cf78-41e5-9914-3fb83f83ec45", 00:16:36.590 "strip_size_kb": 64, 00:16:36.590 "state": "configuring", 00:16:36.590 "raid_level": "raid5f", 00:16:36.590 "superblock": true, 00:16:36.590 "num_base_bdevs": 4, 00:16:36.590 "num_base_bdevs_discovered": 1, 00:16:36.590 "num_base_bdevs_operational": 4, 00:16:36.590 "base_bdevs_list": [ 00:16:36.590 { 00:16:36.590 "name": "BaseBdev1", 00:16:36.590 "uuid": "a354d3d3-cb07-4b39-b727-bf75d1876ec7", 00:16:36.590 "is_configured": true, 00:16:36.590 "data_offset": 2048, 00:16:36.590 "data_size": 63488 00:16:36.590 }, 00:16:36.590 { 00:16:36.590 "name": "BaseBdev2", 00:16:36.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.590 "is_configured": false, 00:16:36.590 "data_offset": 0, 00:16:36.590 "data_size": 0 00:16:36.590 }, 00:16:36.590 { 00:16:36.590 "name": "BaseBdev3", 00:16:36.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.590 "is_configured": false, 00:16:36.590 "data_offset": 0, 00:16:36.590 "data_size": 0 00:16:36.590 }, 00:16:36.590 { 00:16:36.590 "name": "BaseBdev4", 00:16:36.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.590 "is_configured": false, 00:16:36.590 "data_offset": 0, 00:16:36.590 "data_size": 0 00:16:36.590 } 00:16:36.590 ] 00:16:36.590 }' 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.590 09:35:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.849 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:36.849 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
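The trace above shows the test's pattern: attempt `bdev_raid_create` before the base bdevs exist (each reported as "doesn't exist now", leaving the array in the `configuring` state), then add the 32 MiB, 512-byte-block malloc bdevs one at a time and re-dump the raid state after each. A dry-run sketch of that RPC sequence follows; the `rpc.py` invocation is an assumption (by default the script only prints the commands — point `RPC` at SPDK's `scripts/rpc.py` with a running target to execute them for real).

```shell
#!/bin/sh
# Dry-run sketch of the RPC sequence traced in this log.
# RPC is an assumption: default just echoes the commands; set
# RPC=scripts/rpc.py (with a running SPDK target) to run them for real.
RPC="${RPC:-echo rpc.py}"

# Assemble the raid5f array first: -z 64 is strip_size_kb, -s requests the
# on-disk superblock (hence data_offset 2048 in the dumps above).
$RPC bdev_raid_create -z 64 -s -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Add the 32 MiB / 512 B-block malloc base bdevs one by one; the array
# stays in the "configuring" state until all four are claimed.
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "$name"
    $RPC bdev_raid_get_bdevs all
done
```

With the echo default this prints the exact command lines, which makes the sequence easy to compare against the xtrace records in the log.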
00:16:36.849 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.109 [2024-11-15 09:35:25.330879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.109 BaseBdev2 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.109 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.109 [ 00:16:37.109 { 00:16:37.109 "name": "BaseBdev2", 00:16:37.109 "aliases": [ 00:16:37.109 
"2b9fac9a-b810-4dc5-bdbd-c3b00dc4ac9e" 00:16:37.109 ], 00:16:37.109 "product_name": "Malloc disk", 00:16:37.109 "block_size": 512, 00:16:37.109 "num_blocks": 65536, 00:16:37.109 "uuid": "2b9fac9a-b810-4dc5-bdbd-c3b00dc4ac9e", 00:16:37.109 "assigned_rate_limits": { 00:16:37.109 "rw_ios_per_sec": 0, 00:16:37.109 "rw_mbytes_per_sec": 0, 00:16:37.109 "r_mbytes_per_sec": 0, 00:16:37.109 "w_mbytes_per_sec": 0 00:16:37.109 }, 00:16:37.109 "claimed": true, 00:16:37.110 "claim_type": "exclusive_write", 00:16:37.110 "zoned": false, 00:16:37.110 "supported_io_types": { 00:16:37.110 "read": true, 00:16:37.110 "write": true, 00:16:37.110 "unmap": true, 00:16:37.110 "flush": true, 00:16:37.110 "reset": true, 00:16:37.110 "nvme_admin": false, 00:16:37.110 "nvme_io": false, 00:16:37.110 "nvme_io_md": false, 00:16:37.110 "write_zeroes": true, 00:16:37.110 "zcopy": true, 00:16:37.110 "get_zone_info": false, 00:16:37.110 "zone_management": false, 00:16:37.110 "zone_append": false, 00:16:37.110 "compare": false, 00:16:37.110 "compare_and_write": false, 00:16:37.110 "abort": true, 00:16:37.110 "seek_hole": false, 00:16:37.110 "seek_data": false, 00:16:37.110 "copy": true, 00:16:37.110 "nvme_iov_md": false 00:16:37.110 }, 00:16:37.110 "memory_domains": [ 00:16:37.110 { 00:16:37.110 "dma_device_id": "system", 00:16:37.110 "dma_device_type": 1 00:16:37.110 }, 00:16:37.110 { 00:16:37.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.110 "dma_device_type": 2 00:16:37.110 } 00:16:37.110 ], 00:16:37.110 "driver_specific": {} 00:16:37.110 } 00:16:37.110 ] 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.110 "name": "Existed_Raid", 00:16:37.110 "uuid": 
"71cd159b-cf78-41e5-9914-3fb83f83ec45", 00:16:37.110 "strip_size_kb": 64, 00:16:37.110 "state": "configuring", 00:16:37.110 "raid_level": "raid5f", 00:16:37.110 "superblock": true, 00:16:37.110 "num_base_bdevs": 4, 00:16:37.110 "num_base_bdevs_discovered": 2, 00:16:37.110 "num_base_bdevs_operational": 4, 00:16:37.110 "base_bdevs_list": [ 00:16:37.110 { 00:16:37.110 "name": "BaseBdev1", 00:16:37.110 "uuid": "a354d3d3-cb07-4b39-b727-bf75d1876ec7", 00:16:37.110 "is_configured": true, 00:16:37.110 "data_offset": 2048, 00:16:37.110 "data_size": 63488 00:16:37.110 }, 00:16:37.110 { 00:16:37.110 "name": "BaseBdev2", 00:16:37.110 "uuid": "2b9fac9a-b810-4dc5-bdbd-c3b00dc4ac9e", 00:16:37.110 "is_configured": true, 00:16:37.110 "data_offset": 2048, 00:16:37.110 "data_size": 63488 00:16:37.110 }, 00:16:37.110 { 00:16:37.110 "name": "BaseBdev3", 00:16:37.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.110 "is_configured": false, 00:16:37.110 "data_offset": 0, 00:16:37.110 "data_size": 0 00:16:37.110 }, 00:16:37.110 { 00:16:37.110 "name": "BaseBdev4", 00:16:37.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.110 "is_configured": false, 00:16:37.110 "data_offset": 0, 00:16:37.110 "data_size": 0 00:16:37.110 } 00:16:37.110 ] 00:16:37.110 }' 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.110 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.679 [2024-11-15 09:35:25.893468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.679 BaseBdev3 
00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:37.679 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.680 [ 00:16:37.680 { 00:16:37.680 "name": "BaseBdev3", 00:16:37.680 "aliases": [ 00:16:37.680 "3c0557b9-1656-44d1-af51-f585b62098b7" 00:16:37.680 ], 00:16:37.680 "product_name": "Malloc disk", 00:16:37.680 "block_size": 512, 00:16:37.680 "num_blocks": 65536, 00:16:37.680 "uuid": "3c0557b9-1656-44d1-af51-f585b62098b7", 00:16:37.680 
"assigned_rate_limits": { 00:16:37.680 "rw_ios_per_sec": 0, 00:16:37.680 "rw_mbytes_per_sec": 0, 00:16:37.680 "r_mbytes_per_sec": 0, 00:16:37.680 "w_mbytes_per_sec": 0 00:16:37.680 }, 00:16:37.680 "claimed": true, 00:16:37.680 "claim_type": "exclusive_write", 00:16:37.680 "zoned": false, 00:16:37.680 "supported_io_types": { 00:16:37.680 "read": true, 00:16:37.680 "write": true, 00:16:37.680 "unmap": true, 00:16:37.680 "flush": true, 00:16:37.680 "reset": true, 00:16:37.680 "nvme_admin": false, 00:16:37.680 "nvme_io": false, 00:16:37.680 "nvme_io_md": false, 00:16:37.680 "write_zeroes": true, 00:16:37.680 "zcopy": true, 00:16:37.680 "get_zone_info": false, 00:16:37.680 "zone_management": false, 00:16:37.680 "zone_append": false, 00:16:37.680 "compare": false, 00:16:37.680 "compare_and_write": false, 00:16:37.680 "abort": true, 00:16:37.680 "seek_hole": false, 00:16:37.680 "seek_data": false, 00:16:37.680 "copy": true, 00:16:37.680 "nvme_iov_md": false 00:16:37.680 }, 00:16:37.680 "memory_domains": [ 00:16:37.680 { 00:16:37.680 "dma_device_id": "system", 00:16:37.680 "dma_device_type": 1 00:16:37.680 }, 00:16:37.680 { 00:16:37.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.680 "dma_device_type": 2 00:16:37.680 } 00:16:37.680 ], 00:16:37.680 "driver_specific": {} 00:16:37.680 } 00:16:37.680 ] 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.680 "name": "Existed_Raid", 00:16:37.680 "uuid": "71cd159b-cf78-41e5-9914-3fb83f83ec45", 00:16:37.680 "strip_size_kb": 64, 00:16:37.680 "state": "configuring", 00:16:37.680 "raid_level": "raid5f", 00:16:37.680 "superblock": true, 00:16:37.680 "num_base_bdevs": 4, 00:16:37.680 "num_base_bdevs_discovered": 3, 
00:16:37.680 "num_base_bdevs_operational": 4, 00:16:37.680 "base_bdevs_list": [ 00:16:37.680 { 00:16:37.680 "name": "BaseBdev1", 00:16:37.680 "uuid": "a354d3d3-cb07-4b39-b727-bf75d1876ec7", 00:16:37.680 "is_configured": true, 00:16:37.680 "data_offset": 2048, 00:16:37.680 "data_size": 63488 00:16:37.680 }, 00:16:37.680 { 00:16:37.680 "name": "BaseBdev2", 00:16:37.680 "uuid": "2b9fac9a-b810-4dc5-bdbd-c3b00dc4ac9e", 00:16:37.680 "is_configured": true, 00:16:37.680 "data_offset": 2048, 00:16:37.680 "data_size": 63488 00:16:37.680 }, 00:16:37.680 { 00:16:37.680 "name": "BaseBdev3", 00:16:37.680 "uuid": "3c0557b9-1656-44d1-af51-f585b62098b7", 00:16:37.680 "is_configured": true, 00:16:37.680 "data_offset": 2048, 00:16:37.680 "data_size": 63488 00:16:37.680 }, 00:16:37.680 { 00:16:37.680 "name": "BaseBdev4", 00:16:37.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.680 "is_configured": false, 00:16:37.680 "data_offset": 0, 00:16:37.680 "data_size": 0 00:16:37.680 } 00:16:37.680 ] 00:16:37.680 }' 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.680 09:35:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.940 BaseBdev4 00:16:37.940 [2024-11-15 09:35:26.400275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.940 [2024-11-15 09:35:26.400612] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:37.940 [2024-11-15 09:35:26.400629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
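The `verify_raid_bdev_state` helper traced above filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the resulting fields against the expected state. A minimal standalone version of that check is sketched below; it runs against a canned, abbreviated JSON document rather than a live RPC response, and requires `jq` (the same tool the test itself uses).

```shell
#!/bin/sh
# Standalone sketch of the verify_raid_bdev_state check: pick the named raid
# bdev out of a bdev_raid_get_bdevs-style array and compare its fields.
# The JSON below is canned sample data, not live RPC output.
json='[{"name":"Existed_Raid","strip_size_kb":64,"state":"online","raid_level":"raid5f","num_base_bdevs":4,"num_base_bdevs_discovered":4}]'

# Same filter the test uses to isolate one raid bdev from the array.
info=$(printf '%s' "$json" | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(printf '%s' "$info" | jq -r '.state')
level=$(printf '%s' "$info" | jq -r '.raid_level')
strip=$(printf '%s' "$info" | jq -r '.strip_size_kb')

# Fail loudly on mismatch, as the test helper would.
[ "$state" = "online" ] || { echo "unexpected state: $state" >&2; exit 1; }
[ "$level" = "raid5f" ] || { echo "unexpected level: $level" >&2; exit 1; }
[ "$strip" = "64" ]     || { echo "unexpected strip: $strip" >&2; exit 1; }
echo "Existed_Raid verified: $level, strip ${strip}KiB, $state"
```

Swapping the canned `json` for `$(scripts/rpc.py bdev_raid_get_bdevs all)` turns this into a live check against a running target.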
00:16:37.940 [2024-11-15 09:35:26.400949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.940 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 [2024-11-15 09:35:26.408481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.199 [2024-11-15 09:35:26.408559] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:38.199 [2024-11-15 09:35:26.408925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.199 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.199 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:38.199 09:35:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.199 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.199 [ 00:16:38.199 { 00:16:38.199 "name": "BaseBdev4", 00:16:38.199 "aliases": [ 00:16:38.199 "7f27ed31-7c6d-4554-b1f8-bd221ea013ee" 00:16:38.199 ], 00:16:38.199 "product_name": "Malloc disk", 00:16:38.199 "block_size": 512, 00:16:38.199 "num_blocks": 65536, 00:16:38.199 "uuid": "7f27ed31-7c6d-4554-b1f8-bd221ea013ee", 00:16:38.199 "assigned_rate_limits": { 00:16:38.200 "rw_ios_per_sec": 0, 00:16:38.200 "rw_mbytes_per_sec": 0, 00:16:38.200 "r_mbytes_per_sec": 0, 00:16:38.200 "w_mbytes_per_sec": 0 00:16:38.200 }, 00:16:38.200 "claimed": true, 00:16:38.200 "claim_type": "exclusive_write", 00:16:38.200 "zoned": false, 00:16:38.200 "supported_io_types": { 00:16:38.200 "read": true, 00:16:38.200 "write": true, 00:16:38.200 "unmap": true, 00:16:38.200 "flush": true, 00:16:38.200 "reset": true, 00:16:38.200 "nvme_admin": false, 00:16:38.200 "nvme_io": false, 00:16:38.200 "nvme_io_md": false, 00:16:38.200 "write_zeroes": true, 00:16:38.200 "zcopy": true, 00:16:38.200 "get_zone_info": false, 00:16:38.200 "zone_management": false, 00:16:38.200 "zone_append": false, 00:16:38.200 "compare": false, 00:16:38.200 "compare_and_write": false, 00:16:38.200 "abort": true, 00:16:38.200 "seek_hole": false, 00:16:38.200 "seek_data": false, 00:16:38.200 "copy": true, 00:16:38.200 "nvme_iov_md": false 00:16:38.200 }, 00:16:38.200 "memory_domains": [ 00:16:38.200 { 00:16:38.200 "dma_device_id": "system", 00:16:38.200 "dma_device_type": 1 00:16:38.200 }, 00:16:38.200 { 00:16:38.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.200 "dma_device_type": 2 00:16:38.200 } 00:16:38.200 ], 00:16:38.200 "driver_specific": {} 00:16:38.200 } 00:16:38.200 ] 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.200 09:35:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.200 "name": "Existed_Raid", 00:16:38.200 "uuid": "71cd159b-cf78-41e5-9914-3fb83f83ec45", 00:16:38.200 "strip_size_kb": 64, 00:16:38.200 "state": "online", 00:16:38.200 "raid_level": "raid5f", 00:16:38.200 "superblock": true, 00:16:38.200 "num_base_bdevs": 4, 00:16:38.200 "num_base_bdevs_discovered": 4, 00:16:38.200 "num_base_bdevs_operational": 4, 00:16:38.200 "base_bdevs_list": [ 00:16:38.200 { 00:16:38.200 "name": "BaseBdev1", 00:16:38.200 "uuid": "a354d3d3-cb07-4b39-b727-bf75d1876ec7", 00:16:38.200 "is_configured": true, 00:16:38.200 "data_offset": 2048, 00:16:38.200 "data_size": 63488 00:16:38.200 }, 00:16:38.200 { 00:16:38.200 "name": "BaseBdev2", 00:16:38.200 "uuid": "2b9fac9a-b810-4dc5-bdbd-c3b00dc4ac9e", 00:16:38.200 "is_configured": true, 00:16:38.200 "data_offset": 2048, 00:16:38.200 "data_size": 63488 00:16:38.200 }, 00:16:38.200 { 00:16:38.200 "name": "BaseBdev3", 00:16:38.200 "uuid": "3c0557b9-1656-44d1-af51-f585b62098b7", 00:16:38.200 "is_configured": true, 00:16:38.200 "data_offset": 2048, 00:16:38.200 "data_size": 63488 00:16:38.200 }, 00:16:38.200 { 00:16:38.200 "name": "BaseBdev4", 00:16:38.200 "uuid": "7f27ed31-7c6d-4554-b1f8-bd221ea013ee", 00:16:38.200 "is_configured": true, 00:16:38.200 "data_offset": 2048, 00:16:38.200 "data_size": 63488 00:16:38.200 } 00:16:38.200 ] 00:16:38.200 }' 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.200 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.459 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.459 [2024-11-15 09:35:26.906088] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.719 09:35:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.719 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.719 "name": "Existed_Raid", 00:16:38.719 "aliases": [ 00:16:38.719 "71cd159b-cf78-41e5-9914-3fb83f83ec45" 00:16:38.719 ], 00:16:38.719 "product_name": "Raid Volume", 00:16:38.719 "block_size": 512, 00:16:38.719 "num_blocks": 190464, 00:16:38.719 "uuid": "71cd159b-cf78-41e5-9914-3fb83f83ec45", 00:16:38.719 "assigned_rate_limits": { 00:16:38.719 "rw_ios_per_sec": 0, 00:16:38.719 "rw_mbytes_per_sec": 0, 00:16:38.719 "r_mbytes_per_sec": 0, 00:16:38.719 "w_mbytes_per_sec": 0 00:16:38.719 }, 00:16:38.719 "claimed": false, 00:16:38.719 "zoned": false, 00:16:38.719 "supported_io_types": { 00:16:38.719 "read": true, 00:16:38.719 "write": true, 00:16:38.719 "unmap": false, 00:16:38.719 "flush": false, 
00:16:38.719 "reset": true, 00:16:38.719 "nvme_admin": false, 00:16:38.719 "nvme_io": false, 00:16:38.719 "nvme_io_md": false, 00:16:38.719 "write_zeroes": true, 00:16:38.719 "zcopy": false, 00:16:38.719 "get_zone_info": false, 00:16:38.719 "zone_management": false, 00:16:38.719 "zone_append": false, 00:16:38.719 "compare": false, 00:16:38.719 "compare_and_write": false, 00:16:38.719 "abort": false, 00:16:38.719 "seek_hole": false, 00:16:38.719 "seek_data": false, 00:16:38.719 "copy": false, 00:16:38.719 "nvme_iov_md": false 00:16:38.719 }, 00:16:38.719 "driver_specific": { 00:16:38.719 "raid": { 00:16:38.719 "uuid": "71cd159b-cf78-41e5-9914-3fb83f83ec45", 00:16:38.719 "strip_size_kb": 64, 00:16:38.719 "state": "online", 00:16:38.719 "raid_level": "raid5f", 00:16:38.719 "superblock": true, 00:16:38.719 "num_base_bdevs": 4, 00:16:38.719 "num_base_bdevs_discovered": 4, 00:16:38.719 "num_base_bdevs_operational": 4, 00:16:38.719 "base_bdevs_list": [ 00:16:38.719 { 00:16:38.719 "name": "BaseBdev1", 00:16:38.719 "uuid": "a354d3d3-cb07-4b39-b727-bf75d1876ec7", 00:16:38.719 "is_configured": true, 00:16:38.719 "data_offset": 2048, 00:16:38.719 "data_size": 63488 00:16:38.719 }, 00:16:38.719 { 00:16:38.719 "name": "BaseBdev2", 00:16:38.719 "uuid": "2b9fac9a-b810-4dc5-bdbd-c3b00dc4ac9e", 00:16:38.719 "is_configured": true, 00:16:38.719 "data_offset": 2048, 00:16:38.719 "data_size": 63488 00:16:38.719 }, 00:16:38.719 { 00:16:38.719 "name": "BaseBdev3", 00:16:38.719 "uuid": "3c0557b9-1656-44d1-af51-f585b62098b7", 00:16:38.719 "is_configured": true, 00:16:38.719 "data_offset": 2048, 00:16:38.719 "data_size": 63488 00:16:38.719 }, 00:16:38.719 { 00:16:38.719 "name": "BaseBdev4", 00:16:38.719 "uuid": "7f27ed31-7c6d-4554-b1f8-bd221ea013ee", 00:16:38.719 "is_configured": true, 00:16:38.719 "data_offset": 2048, 00:16:38.719 "data_size": 63488 00:16:38.719 } 00:16:38.719 ] 00:16:38.719 } 00:16:38.719 } 00:16:38.719 }' 00:16:38.719 09:35:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.720 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:38.720 BaseBdev2 00:16:38.720 BaseBdev3 00:16:38.720 BaseBdev4' 00:16:38.720 09:35:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.720 09:35:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:38.720 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.980 [2024-11-15 09:35:27.225321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.980 "name": "Existed_Raid", 00:16:38.980 "uuid": "71cd159b-cf78-41e5-9914-3fb83f83ec45", 00:16:38.980 "strip_size_kb": 64, 00:16:38.980 "state": "online", 00:16:38.980 "raid_level": "raid5f", 00:16:38.980 "superblock": true, 00:16:38.980 "num_base_bdevs": 4, 00:16:38.980 "num_base_bdevs_discovered": 3, 00:16:38.980 "num_base_bdevs_operational": 3, 00:16:38.980 "base_bdevs_list": [ 00:16:38.980 { 00:16:38.980 "name": null, 00:16:38.980 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:38.980 "is_configured": false, 00:16:38.980 "data_offset": 0, 00:16:38.980 "data_size": 63488 00:16:38.980 }, 00:16:38.980 { 00:16:38.980 "name": "BaseBdev2", 00:16:38.980 "uuid": "2b9fac9a-b810-4dc5-bdbd-c3b00dc4ac9e", 00:16:38.980 "is_configured": true, 00:16:38.980 "data_offset": 2048, 00:16:38.980 "data_size": 63488 00:16:38.980 }, 00:16:38.980 { 00:16:38.980 "name": "BaseBdev3", 00:16:38.980 "uuid": "3c0557b9-1656-44d1-af51-f585b62098b7", 00:16:38.980 "is_configured": true, 00:16:38.980 "data_offset": 2048, 00:16:38.980 "data_size": 63488 00:16:38.980 }, 00:16:38.980 { 00:16:38.980 "name": "BaseBdev4", 00:16:38.980 "uuid": "7f27ed31-7c6d-4554-b1f8-bd221ea013ee", 00:16:38.980 "is_configured": true, 00:16:38.980 "data_offset": 2048, 00:16:38.980 "data_size": 63488 00:16:38.980 } 00:16:38.980 ] 00:16:38.980 }' 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.980 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.549 [2024-11-15 09:35:27.851848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.549 [2024-11-15 09:35:27.852126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.549 [2024-11-15 09:35:27.958034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.549 09:35:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.809 
09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.809 [2024-11-15 09:35:28.021958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.809 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.809 [2024-11-15 09:35:28.186655] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:39.809 [2024-11-15 09:35:28.186730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:40.069 BaseBdev2 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.069 [ 00:16:40.069 { 00:16:40.069 "name": "BaseBdev2", 00:16:40.069 "aliases": [ 00:16:40.069 "2bef24b6-cc86-4919-8f70-854aac4cd0b2" 00:16:40.069 ], 00:16:40.069 "product_name": "Malloc disk", 00:16:40.069 "block_size": 512, 00:16:40.069 "num_blocks": 65536, 00:16:40.069 "uuid": 
"2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:40.069 "assigned_rate_limits": { 00:16:40.069 "rw_ios_per_sec": 0, 00:16:40.069 "rw_mbytes_per_sec": 0, 00:16:40.069 "r_mbytes_per_sec": 0, 00:16:40.069 "w_mbytes_per_sec": 0 00:16:40.069 }, 00:16:40.069 "claimed": false, 00:16:40.069 "zoned": false, 00:16:40.069 "supported_io_types": { 00:16:40.069 "read": true, 00:16:40.069 "write": true, 00:16:40.069 "unmap": true, 00:16:40.069 "flush": true, 00:16:40.069 "reset": true, 00:16:40.069 "nvme_admin": false, 00:16:40.069 "nvme_io": false, 00:16:40.069 "nvme_io_md": false, 00:16:40.069 "write_zeroes": true, 00:16:40.069 "zcopy": true, 00:16:40.069 "get_zone_info": false, 00:16:40.069 "zone_management": false, 00:16:40.069 "zone_append": false, 00:16:40.069 "compare": false, 00:16:40.069 "compare_and_write": false, 00:16:40.069 "abort": true, 00:16:40.069 "seek_hole": false, 00:16:40.069 "seek_data": false, 00:16:40.069 "copy": true, 00:16:40.069 "nvme_iov_md": false 00:16:40.069 }, 00:16:40.069 "memory_domains": [ 00:16:40.069 { 00:16:40.069 "dma_device_id": "system", 00:16:40.069 "dma_device_type": 1 00:16:40.069 }, 00:16:40.069 { 00:16:40.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.069 "dma_device_type": 2 00:16:40.069 } 00:16:40.069 ], 00:16:40.069 "driver_specific": {} 00:16:40.069 } 00:16:40.069 ] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.069 BaseBdev3 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.069 [ 00:16:40.069 { 00:16:40.069 "name": "BaseBdev3", 00:16:40.069 "aliases": [ 00:16:40.069 "b25fb82b-cc11-4496-a1f7-d707cb386428" 00:16:40.069 ], 00:16:40.069 
"product_name": "Malloc disk", 00:16:40.069 "block_size": 512, 00:16:40.069 "num_blocks": 65536, 00:16:40.069 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:40.069 "assigned_rate_limits": { 00:16:40.069 "rw_ios_per_sec": 0, 00:16:40.069 "rw_mbytes_per_sec": 0, 00:16:40.069 "r_mbytes_per_sec": 0, 00:16:40.069 "w_mbytes_per_sec": 0 00:16:40.069 }, 00:16:40.069 "claimed": false, 00:16:40.069 "zoned": false, 00:16:40.069 "supported_io_types": { 00:16:40.069 "read": true, 00:16:40.069 "write": true, 00:16:40.069 "unmap": true, 00:16:40.069 "flush": true, 00:16:40.069 "reset": true, 00:16:40.069 "nvme_admin": false, 00:16:40.069 "nvme_io": false, 00:16:40.069 "nvme_io_md": false, 00:16:40.069 "write_zeroes": true, 00:16:40.069 "zcopy": true, 00:16:40.069 "get_zone_info": false, 00:16:40.069 "zone_management": false, 00:16:40.069 "zone_append": false, 00:16:40.069 "compare": false, 00:16:40.069 "compare_and_write": false, 00:16:40.069 "abort": true, 00:16:40.069 "seek_hole": false, 00:16:40.069 "seek_data": false, 00:16:40.069 "copy": true, 00:16:40.069 "nvme_iov_md": false 00:16:40.069 }, 00:16:40.069 "memory_domains": [ 00:16:40.069 { 00:16:40.069 "dma_device_id": "system", 00:16:40.069 "dma_device_type": 1 00:16:40.069 }, 00:16:40.069 { 00:16:40.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.069 "dma_device_type": 2 00:16:40.069 } 00:16:40.069 ], 00:16:40.069 "driver_specific": {} 00:16:40.069 } 00:16:40.069 ] 00:16:40.069 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.070 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:40.070 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.070 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.070 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:40.070 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.070 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.329 BaseBdev4 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.329 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.329 [ 00:16:40.329 { 00:16:40.329 "name": "BaseBdev4", 00:16:40.329 
"aliases": [ 00:16:40.329 "8c9128f0-71c7-4bbe-a256-d963cde42024" 00:16:40.329 ], 00:16:40.329 "product_name": "Malloc disk", 00:16:40.329 "block_size": 512, 00:16:40.329 "num_blocks": 65536, 00:16:40.329 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:40.329 "assigned_rate_limits": { 00:16:40.329 "rw_ios_per_sec": 0, 00:16:40.329 "rw_mbytes_per_sec": 0, 00:16:40.329 "r_mbytes_per_sec": 0, 00:16:40.329 "w_mbytes_per_sec": 0 00:16:40.329 }, 00:16:40.329 "claimed": false, 00:16:40.329 "zoned": false, 00:16:40.329 "supported_io_types": { 00:16:40.329 "read": true, 00:16:40.329 "write": true, 00:16:40.329 "unmap": true, 00:16:40.329 "flush": true, 00:16:40.329 "reset": true, 00:16:40.329 "nvme_admin": false, 00:16:40.329 "nvme_io": false, 00:16:40.329 "nvme_io_md": false, 00:16:40.329 "write_zeroes": true, 00:16:40.329 "zcopy": true, 00:16:40.329 "get_zone_info": false, 00:16:40.329 "zone_management": false, 00:16:40.329 "zone_append": false, 00:16:40.329 "compare": false, 00:16:40.329 "compare_and_write": false, 00:16:40.329 "abort": true, 00:16:40.329 "seek_hole": false, 00:16:40.329 "seek_data": false, 00:16:40.329 "copy": true, 00:16:40.329 "nvme_iov_md": false 00:16:40.329 }, 00:16:40.329 "memory_domains": [ 00:16:40.329 { 00:16:40.329 "dma_device_id": "system", 00:16:40.329 "dma_device_type": 1 00:16:40.329 }, 00:16:40.329 { 00:16:40.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.329 "dma_device_type": 2 00:16:40.329 } 00:16:40.329 ], 00:16:40.329 "driver_specific": {} 00:16:40.329 } 00:16:40.329 ] 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.330 
09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.330 [2024-11-15 09:35:28.625886] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.330 [2024-11-15 09:35:28.626010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.330 [2024-11-15 09:35:28.626059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.330 [2024-11-15 09:35:28.628247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.330 [2024-11-15 09:35:28.628346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.330 "name": "Existed_Raid", 00:16:40.330 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:40.330 "strip_size_kb": 64, 00:16:40.330 "state": "configuring", 00:16:40.330 "raid_level": "raid5f", 00:16:40.330 "superblock": true, 00:16:40.330 "num_base_bdevs": 4, 00:16:40.330 "num_base_bdevs_discovered": 3, 00:16:40.330 "num_base_bdevs_operational": 4, 00:16:40.330 "base_bdevs_list": [ 00:16:40.330 { 00:16:40.330 "name": "BaseBdev1", 00:16:40.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.330 "is_configured": false, 00:16:40.330 "data_offset": 0, 00:16:40.330 "data_size": 0 00:16:40.330 }, 00:16:40.330 { 00:16:40.330 "name": "BaseBdev2", 00:16:40.330 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:40.330 "is_configured": true, 00:16:40.330 "data_offset": 2048, 00:16:40.330 "data_size": 63488 00:16:40.330 }, 00:16:40.330 { 00:16:40.330 "name": "BaseBdev3", 
00:16:40.330 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:40.330 "is_configured": true, 00:16:40.330 "data_offset": 2048, 00:16:40.330 "data_size": 63488 00:16:40.330 }, 00:16:40.330 { 00:16:40.330 "name": "BaseBdev4", 00:16:40.330 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:40.330 "is_configured": true, 00:16:40.330 "data_offset": 2048, 00:16:40.330 "data_size": 63488 00:16:40.330 } 00:16:40.330 ] 00:16:40.330 }' 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.330 09:35:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.589 [2024-11-15 09:35:29.033191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.589 
09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.589 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.590 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.590 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.590 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.849 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.849 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.849 "name": "Existed_Raid", 00:16:40.849 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:40.849 "strip_size_kb": 64, 00:16:40.849 "state": "configuring", 00:16:40.849 "raid_level": "raid5f", 00:16:40.849 "superblock": true, 00:16:40.849 "num_base_bdevs": 4, 00:16:40.849 "num_base_bdevs_discovered": 2, 00:16:40.849 "num_base_bdevs_operational": 4, 00:16:40.849 "base_bdevs_list": [ 00:16:40.849 { 00:16:40.849 "name": "BaseBdev1", 00:16:40.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.849 "is_configured": false, 00:16:40.849 "data_offset": 0, 00:16:40.849 "data_size": 0 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "name": null, 00:16:40.849 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:40.849 "is_configured": false, 00:16:40.849 "data_offset": 0, 00:16:40.849 "data_size": 63488 00:16:40.849 }, 00:16:40.849 { 
00:16:40.849 "name": "BaseBdev3", 00:16:40.849 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:40.849 "is_configured": true, 00:16:40.849 "data_offset": 2048, 00:16:40.849 "data_size": 63488 00:16:40.849 }, 00:16:40.849 { 00:16:40.849 "name": "BaseBdev4", 00:16:40.849 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:40.849 "is_configured": true, 00:16:40.849 "data_offset": 2048, 00:16:40.849 "data_size": 63488 00:16:40.849 } 00:16:40.849 ] 00:16:40.849 }' 00:16:40.849 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.849 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.108 [2024-11-15 09:35:29.555913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.108 BaseBdev1 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.108 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.367 [ 00:16:41.367 { 00:16:41.367 "name": "BaseBdev1", 00:16:41.367 "aliases": [ 00:16:41.367 "1002dbdb-aa00-4025-8501-4f6cef4d023e" 00:16:41.367 ], 00:16:41.367 "product_name": "Malloc disk", 00:16:41.367 "block_size": 512, 00:16:41.367 "num_blocks": 65536, 00:16:41.367 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:41.367 "assigned_rate_limits": { 00:16:41.367 "rw_ios_per_sec": 0, 00:16:41.367 "rw_mbytes_per_sec": 0, 00:16:41.367 
"r_mbytes_per_sec": 0, 00:16:41.367 "w_mbytes_per_sec": 0 00:16:41.367 }, 00:16:41.367 "claimed": true, 00:16:41.367 "claim_type": "exclusive_write", 00:16:41.367 "zoned": false, 00:16:41.367 "supported_io_types": { 00:16:41.367 "read": true, 00:16:41.367 "write": true, 00:16:41.368 "unmap": true, 00:16:41.368 "flush": true, 00:16:41.368 "reset": true, 00:16:41.368 "nvme_admin": false, 00:16:41.368 "nvme_io": false, 00:16:41.368 "nvme_io_md": false, 00:16:41.368 "write_zeroes": true, 00:16:41.368 "zcopy": true, 00:16:41.368 "get_zone_info": false, 00:16:41.368 "zone_management": false, 00:16:41.368 "zone_append": false, 00:16:41.368 "compare": false, 00:16:41.368 "compare_and_write": false, 00:16:41.368 "abort": true, 00:16:41.368 "seek_hole": false, 00:16:41.368 "seek_data": false, 00:16:41.368 "copy": true, 00:16:41.368 "nvme_iov_md": false 00:16:41.368 }, 00:16:41.368 "memory_domains": [ 00:16:41.368 { 00:16:41.368 "dma_device_id": "system", 00:16:41.368 "dma_device_type": 1 00:16:41.368 }, 00:16:41.368 { 00:16:41.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.368 "dma_device_type": 2 00:16:41.368 } 00:16:41.368 ], 00:16:41.368 "driver_specific": {} 00:16:41.368 } 00:16:41.368 ] 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.368 09:35:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.368 "name": "Existed_Raid", 00:16:41.368 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:41.368 "strip_size_kb": 64, 00:16:41.368 "state": "configuring", 00:16:41.368 "raid_level": "raid5f", 00:16:41.368 "superblock": true, 00:16:41.368 "num_base_bdevs": 4, 00:16:41.368 "num_base_bdevs_discovered": 3, 00:16:41.368 "num_base_bdevs_operational": 4, 00:16:41.368 "base_bdevs_list": [ 00:16:41.368 { 00:16:41.368 "name": "BaseBdev1", 00:16:41.368 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:41.368 "is_configured": true, 00:16:41.368 "data_offset": 2048, 00:16:41.368 "data_size": 63488 00:16:41.368 
}, 00:16:41.368 { 00:16:41.368 "name": null, 00:16:41.368 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:41.368 "is_configured": false, 00:16:41.368 "data_offset": 0, 00:16:41.368 "data_size": 63488 00:16:41.368 }, 00:16:41.368 { 00:16:41.368 "name": "BaseBdev3", 00:16:41.368 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:41.368 "is_configured": true, 00:16:41.368 "data_offset": 2048, 00:16:41.368 "data_size": 63488 00:16:41.368 }, 00:16:41.368 { 00:16:41.368 "name": "BaseBdev4", 00:16:41.368 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:41.368 "is_configured": true, 00:16:41.368 "data_offset": 2048, 00:16:41.368 "data_size": 63488 00:16:41.368 } 00:16:41.368 ] 00:16:41.368 }' 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.368 09:35:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.628 
[2024-11-15 09:35:30.083088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.628 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.888 "name": "Existed_Raid", 00:16:41.888 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:41.888 "strip_size_kb": 64, 00:16:41.888 "state": "configuring", 00:16:41.888 "raid_level": "raid5f", 00:16:41.888 "superblock": true, 00:16:41.888 "num_base_bdevs": 4, 00:16:41.888 "num_base_bdevs_discovered": 2, 00:16:41.888 "num_base_bdevs_operational": 4, 00:16:41.888 "base_bdevs_list": [ 00:16:41.888 { 00:16:41.888 "name": "BaseBdev1", 00:16:41.888 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:41.888 "is_configured": true, 00:16:41.888 "data_offset": 2048, 00:16:41.888 "data_size": 63488 00:16:41.888 }, 00:16:41.888 { 00:16:41.888 "name": null, 00:16:41.888 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:41.888 "is_configured": false, 00:16:41.888 "data_offset": 0, 00:16:41.888 "data_size": 63488 00:16:41.888 }, 00:16:41.888 { 00:16:41.888 "name": null, 00:16:41.888 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:41.888 "is_configured": false, 00:16:41.888 "data_offset": 0, 00:16:41.888 "data_size": 63488 00:16:41.888 }, 00:16:41.888 { 00:16:41.888 "name": "BaseBdev4", 00:16:41.888 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:41.888 "is_configured": true, 00:16:41.888 "data_offset": 2048, 00:16:41.888 "data_size": 63488 00:16:41.888 } 00:16:41.888 ] 00:16:41.888 }' 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.888 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.147 [2024-11-15 09:35:30.562263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.147 09:35:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.147 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.147 "name": "Existed_Raid", 00:16:42.147 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:42.147 "strip_size_kb": 64, 00:16:42.147 "state": "configuring", 00:16:42.147 "raid_level": "raid5f", 00:16:42.147 "superblock": true, 00:16:42.147 "num_base_bdevs": 4, 00:16:42.147 "num_base_bdevs_discovered": 3, 00:16:42.147 "num_base_bdevs_operational": 4, 00:16:42.147 "base_bdevs_list": [ 00:16:42.147 { 00:16:42.147 "name": "BaseBdev1", 00:16:42.147 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:42.147 "is_configured": true, 00:16:42.147 "data_offset": 2048, 00:16:42.147 "data_size": 63488 00:16:42.148 }, 00:16:42.148 { 00:16:42.148 "name": null, 00:16:42.148 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:42.148 "is_configured": false, 00:16:42.148 "data_offset": 0, 00:16:42.148 "data_size": 63488 00:16:42.148 }, 00:16:42.148 { 00:16:42.148 "name": "BaseBdev3", 00:16:42.148 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:42.148 "is_configured": true, 00:16:42.148 "data_offset": 2048, 00:16:42.148 "data_size": 63488 00:16:42.148 }, 00:16:42.148 { 
00:16:42.148 "name": "BaseBdev4", 00:16:42.148 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:42.148 "is_configured": true, 00:16:42.148 "data_offset": 2048, 00:16:42.148 "data_size": 63488 00:16:42.148 } 00:16:42.148 ] 00:16:42.148 }' 00:16:42.148 09:35:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.148 09:35:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.718 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.718 [2024-11-15 09:35:31.085428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.977 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.978 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.978 "name": "Existed_Raid", 00:16:42.978 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:42.978 "strip_size_kb": 64, 00:16:42.978 "state": "configuring", 00:16:42.978 "raid_level": "raid5f", 00:16:42.978 "superblock": true, 00:16:42.978 "num_base_bdevs": 4, 00:16:42.978 "num_base_bdevs_discovered": 2, 00:16:42.978 
"num_base_bdevs_operational": 4, 00:16:42.978 "base_bdevs_list": [ 00:16:42.978 { 00:16:42.978 "name": null, 00:16:42.978 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:42.978 "is_configured": false, 00:16:42.978 "data_offset": 0, 00:16:42.978 "data_size": 63488 00:16:42.978 }, 00:16:42.978 { 00:16:42.978 "name": null, 00:16:42.978 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:42.978 "is_configured": false, 00:16:42.978 "data_offset": 0, 00:16:42.978 "data_size": 63488 00:16:42.978 }, 00:16:42.978 { 00:16:42.978 "name": "BaseBdev3", 00:16:42.978 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:42.978 "is_configured": true, 00:16:42.978 "data_offset": 2048, 00:16:42.978 "data_size": 63488 00:16:42.978 }, 00:16:42.978 { 00:16:42.978 "name": "BaseBdev4", 00:16:42.978 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:42.978 "is_configured": true, 00:16:42.978 "data_offset": 2048, 00:16:42.978 "data_size": 63488 00:16:42.978 } 00:16:42.978 ] 00:16:42.978 }' 00:16:42.978 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.978 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.236 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.236 [2024-11-15 09:35:31.697212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.495 "name": "Existed_Raid", 00:16:43.495 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:43.495 "strip_size_kb": 64, 00:16:43.495 "state": "configuring", 00:16:43.495 "raid_level": "raid5f", 00:16:43.495 "superblock": true, 00:16:43.495 "num_base_bdevs": 4, 00:16:43.495 "num_base_bdevs_discovered": 3, 00:16:43.495 "num_base_bdevs_operational": 4, 00:16:43.495 "base_bdevs_list": [ 00:16:43.495 { 00:16:43.495 "name": null, 00:16:43.495 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:43.495 "is_configured": false, 00:16:43.495 "data_offset": 0, 00:16:43.495 "data_size": 63488 00:16:43.495 }, 00:16:43.495 { 00:16:43.495 "name": "BaseBdev2", 00:16:43.495 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:43.495 "is_configured": true, 00:16:43.495 "data_offset": 2048, 00:16:43.495 "data_size": 63488 00:16:43.495 }, 00:16:43.495 { 00:16:43.495 "name": "BaseBdev3", 00:16:43.495 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:43.495 "is_configured": true, 00:16:43.495 "data_offset": 2048, 00:16:43.495 "data_size": 63488 00:16:43.495 }, 00:16:43.495 { 00:16:43.495 "name": "BaseBdev4", 00:16:43.495 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:43.495 "is_configured": true, 00:16:43.495 "data_offset": 2048, 00:16:43.495 "data_size": 63488 00:16:43.495 } 00:16:43.495 ] 00:16:43.495 }' 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.495 09:35:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.754 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1002dbdb-aa00-4025-8501-4f6cef4d023e 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.014 [2024-11-15 09:35:32.299516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:44.014 [2024-11-15 09:35:32.299822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:44.014 [2024-11-15 
09:35:32.299838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.014 NewBaseBdev 00:16:44.014 [2024-11-15 09:35:32.300199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.014 [2024-11-15 09:35:32.308400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:44.014 [2024-11-15 09:35:32.308428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:44.014 [2024-11-15 09:35:32.308736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.014 [ 00:16:44.014 { 00:16:44.014 "name": "NewBaseBdev", 00:16:44.014 "aliases": [ 00:16:44.014 "1002dbdb-aa00-4025-8501-4f6cef4d023e" 00:16:44.014 ], 00:16:44.014 "product_name": "Malloc disk", 00:16:44.014 "block_size": 512, 00:16:44.014 "num_blocks": 65536, 00:16:44.014 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:44.014 "assigned_rate_limits": { 00:16:44.014 "rw_ios_per_sec": 0, 00:16:44.014 "rw_mbytes_per_sec": 0, 00:16:44.014 "r_mbytes_per_sec": 0, 00:16:44.014 "w_mbytes_per_sec": 0 00:16:44.014 }, 00:16:44.014 "claimed": true, 00:16:44.014 "claim_type": "exclusive_write", 00:16:44.014 "zoned": false, 00:16:44.014 "supported_io_types": { 00:16:44.014 "read": true, 00:16:44.014 "write": true, 00:16:44.014 "unmap": true, 00:16:44.014 "flush": true, 00:16:44.014 "reset": true, 00:16:44.014 "nvme_admin": false, 00:16:44.014 "nvme_io": false, 00:16:44.014 "nvme_io_md": false, 00:16:44.014 "write_zeroes": true, 00:16:44.014 "zcopy": true, 00:16:44.014 "get_zone_info": false, 00:16:44.014 "zone_management": false, 00:16:44.014 "zone_append": false, 00:16:44.014 "compare": false, 00:16:44.014 "compare_and_write": false, 00:16:44.014 "abort": true, 00:16:44.014 "seek_hole": false, 00:16:44.014 "seek_data": false, 00:16:44.014 "copy": true, 00:16:44.014 "nvme_iov_md": false 00:16:44.014 }, 00:16:44.014 "memory_domains": [ 00:16:44.014 { 00:16:44.014 "dma_device_id": "system", 00:16:44.014 "dma_device_type": 1 00:16:44.014 }, 00:16:44.014 { 00:16:44.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.014 "dma_device_type": 2 00:16:44.014 } 00:16:44.014 ], 00:16:44.014 "driver_specific": {} 00:16:44.014 } 00:16:44.014 ] 00:16:44.014 09:35:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:44.014 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.014 "name": "Existed_Raid", 00:16:44.014 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:44.014 "strip_size_kb": 64, 00:16:44.014 "state": "online", 00:16:44.014 "raid_level": "raid5f", 00:16:44.014 "superblock": true, 00:16:44.014 "num_base_bdevs": 4, 00:16:44.014 "num_base_bdevs_discovered": 4, 00:16:44.014 "num_base_bdevs_operational": 4, 00:16:44.014 "base_bdevs_list": [ 00:16:44.014 { 00:16:44.015 "name": "NewBaseBdev", 00:16:44.015 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:44.015 "is_configured": true, 00:16:44.015 "data_offset": 2048, 00:16:44.015 "data_size": 63488 00:16:44.015 }, 00:16:44.015 { 00:16:44.015 "name": "BaseBdev2", 00:16:44.015 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:44.015 "is_configured": true, 00:16:44.015 "data_offset": 2048, 00:16:44.015 "data_size": 63488 00:16:44.015 }, 00:16:44.015 { 00:16:44.015 "name": "BaseBdev3", 00:16:44.015 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:44.015 "is_configured": true, 00:16:44.015 "data_offset": 2048, 00:16:44.015 "data_size": 63488 00:16:44.015 }, 00:16:44.015 { 00:16:44.015 "name": "BaseBdev4", 00:16:44.015 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:44.015 "is_configured": true, 00:16:44.015 "data_offset": 2048, 00:16:44.015 "data_size": 63488 00:16:44.015 } 00:16:44.015 ] 00:16:44.015 }' 00:16:44.015 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.015 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 [2024-11-15 09:35:32.794638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.584 "name": "Existed_Raid", 00:16:44.584 "aliases": [ 00:16:44.584 "8d42ba93-dcac-407d-98cf-82d4ec38ab48" 00:16:44.584 ], 00:16:44.584 "product_name": "Raid Volume", 00:16:44.584 "block_size": 512, 00:16:44.584 "num_blocks": 190464, 00:16:44.584 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:44.584 "assigned_rate_limits": { 00:16:44.584 "rw_ios_per_sec": 0, 00:16:44.584 "rw_mbytes_per_sec": 0, 00:16:44.584 "r_mbytes_per_sec": 0, 00:16:44.584 "w_mbytes_per_sec": 0 00:16:44.584 }, 00:16:44.584 "claimed": false, 00:16:44.584 "zoned": false, 00:16:44.584 "supported_io_types": { 00:16:44.584 "read": true, 00:16:44.584 "write": true, 00:16:44.584 "unmap": false, 00:16:44.584 "flush": false, 00:16:44.584 "reset": true, 00:16:44.584 "nvme_admin": false, 00:16:44.584 "nvme_io": false, 
00:16:44.584 "nvme_io_md": false, 00:16:44.584 "write_zeroes": true, 00:16:44.584 "zcopy": false, 00:16:44.584 "get_zone_info": false, 00:16:44.584 "zone_management": false, 00:16:44.584 "zone_append": false, 00:16:44.584 "compare": false, 00:16:44.584 "compare_and_write": false, 00:16:44.584 "abort": false, 00:16:44.584 "seek_hole": false, 00:16:44.584 "seek_data": false, 00:16:44.584 "copy": false, 00:16:44.584 "nvme_iov_md": false 00:16:44.584 }, 00:16:44.584 "driver_specific": { 00:16:44.584 "raid": { 00:16:44.584 "uuid": "8d42ba93-dcac-407d-98cf-82d4ec38ab48", 00:16:44.584 "strip_size_kb": 64, 00:16:44.584 "state": "online", 00:16:44.584 "raid_level": "raid5f", 00:16:44.584 "superblock": true, 00:16:44.584 "num_base_bdevs": 4, 00:16:44.584 "num_base_bdevs_discovered": 4, 00:16:44.584 "num_base_bdevs_operational": 4, 00:16:44.584 "base_bdevs_list": [ 00:16:44.584 { 00:16:44.584 "name": "NewBaseBdev", 00:16:44.584 "uuid": "1002dbdb-aa00-4025-8501-4f6cef4d023e", 00:16:44.584 "is_configured": true, 00:16:44.584 "data_offset": 2048, 00:16:44.584 "data_size": 63488 00:16:44.584 }, 00:16:44.584 { 00:16:44.584 "name": "BaseBdev2", 00:16:44.584 "uuid": "2bef24b6-cc86-4919-8f70-854aac4cd0b2", 00:16:44.584 "is_configured": true, 00:16:44.584 "data_offset": 2048, 00:16:44.584 "data_size": 63488 00:16:44.584 }, 00:16:44.584 { 00:16:44.584 "name": "BaseBdev3", 00:16:44.584 "uuid": "b25fb82b-cc11-4496-a1f7-d707cb386428", 00:16:44.584 "is_configured": true, 00:16:44.584 "data_offset": 2048, 00:16:44.584 "data_size": 63488 00:16:44.584 }, 00:16:44.584 { 00:16:44.584 "name": "BaseBdev4", 00:16:44.584 "uuid": "8c9128f0-71c7-4bbe-a256-d963cde42024", 00:16:44.584 "is_configured": true, 00:16:44.584 "data_offset": 2048, 00:16:44.584 "data_size": 63488 00:16:44.584 } 00:16:44.584 ] 00:16:44.584 } 00:16:44.584 } 00:16:44.584 }' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:44.584 BaseBdev2 00:16:44.584 BaseBdev3 00:16:44.584 BaseBdev4' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 09:35:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.584 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.844 [2024-11-15 09:35:33.125815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.844 [2024-11-15 09:35:33.125875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.844 [2024-11-15 09:35:33.125988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.844 [2024-11-15 09:35:33.126323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.844 [2024-11-15 09:35:33.126335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83866 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83866 ']' 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83866 00:16:44.844 09:35:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83866 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:44.844 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83866' 00:16:44.844 killing process with pid 83866 00:16:44.845 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83866 00:16:44.845 [2024-11-15 09:35:33.168469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.845 09:35:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83866 00:16:45.414 [2024-11-15 09:35:33.629089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.800 09:35:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:46.800 00:16:46.800 real 0m12.103s 00:16:46.800 user 0m18.780s 00:16:46.800 sys 0m2.401s 00:16:46.800 ************************************ 00:16:46.800 END TEST raid5f_state_function_test_sb 00:16:46.800 ************************************ 00:16:46.800 09:35:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:46.800 09:35:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.800 09:35:34 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:46.800 09:35:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:46.800 
09:35:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:46.800 09:35:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.800 ************************************ 00:16:46.800 START TEST raid5f_superblock_test 00:16:46.800 ************************************ 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84542 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84542 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84542 ']' 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:46.800 09:35:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.800 [2024-11-15 09:35:35.099211] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:16:46.800 [2024-11-15 09:35:35.099521] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84542 ] 00:16:47.060 [2024-11-15 09:35:35.294998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.060 [2024-11-15 09:35:35.446186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.320 [2024-11-15 09:35:35.689912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.320 [2024-11-15 09:35:35.690093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.579 09:35:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.579 malloc1 00:16:47.579 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.579 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.579 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.579 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.579 [2024-11-15 09:35:36.007965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.579 [2024-11-15 09:35:36.008053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.579 [2024-11-15 09:35:36.008077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.579 [2024-11-15 09:35:36.008087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.579 [2024-11-15 09:35:36.010677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.579 [2024-11-15 09:35:36.010716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.579 pt1 00:16:47.579 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.580 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.840 malloc2 00:16:47.840 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.840 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.840 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.840 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.840 [2024-11-15 09:35:36.070762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.840 [2024-11-15 09:35:36.070981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.840 [2024-11-15 09:35:36.071032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.840 [2024-11-15 09:35:36.071085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.840 [2024-11-15 09:35:36.073685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.840 [2024-11-15 09:35:36.073762] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.840 pt2 00:16:47.840 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.840 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.841 malloc3 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.841 [2024-11-15 09:35:36.152050] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:47.841 [2024-11-15 09:35:36.152177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.841 [2024-11-15 09:35:36.152218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:47.841 [2024-11-15 09:35:36.152247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.841 [2024-11-15 09:35:36.154685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.841 [2024-11-15 09:35:36.154758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:47.841 pt3 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.841 09:35:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.841 malloc4 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.841 [2024-11-15 09:35:36.219998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:47.841 [2024-11-15 09:35:36.220140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.841 [2024-11-15 09:35:36.220169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:47.841 [2024-11-15 09:35:36.220180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.841 [2024-11-15 09:35:36.222628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.841 [2024-11-15 09:35:36.222668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:47.841 pt4 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.841 [2024-11-15 09:35:36.232057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.841 [2024-11-15 09:35:36.234293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.841 [2024-11-15 09:35:36.234363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:47.841 [2024-11-15 09:35:36.234426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:47.841 [2024-11-15 09:35:36.234636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:47.841 [2024-11-15 09:35:36.234653] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:47.841 [2024-11-15 09:35:36.234946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:47.841 [2024-11-15 09:35:36.243004] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:47.841 [2024-11-15 09:35:36.243069] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:47.841 [2024-11-15 09:35:36.243364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.841 
09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.841 "name": "raid_bdev1", 00:16:47.841 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:47.841 "strip_size_kb": 64, 00:16:47.841 "state": "online", 00:16:47.841 "raid_level": "raid5f", 00:16:47.841 "superblock": true, 00:16:47.841 "num_base_bdevs": 4, 00:16:47.841 "num_base_bdevs_discovered": 4, 00:16:47.841 "num_base_bdevs_operational": 4, 00:16:47.841 "base_bdevs_list": [ 00:16:47.841 { 00:16:47.841 "name": "pt1", 00:16:47.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.841 "is_configured": true, 00:16:47.841 "data_offset": 2048, 00:16:47.841 "data_size": 63488 00:16:47.841 }, 00:16:47.841 { 00:16:47.841 "name": "pt2", 00:16:47.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.841 "is_configured": true, 00:16:47.841 "data_offset": 2048, 00:16:47.841 
"data_size": 63488 00:16:47.841 }, 00:16:47.841 { 00:16:47.841 "name": "pt3", 00:16:47.841 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.841 "is_configured": true, 00:16:47.841 "data_offset": 2048, 00:16:47.841 "data_size": 63488 00:16:47.841 }, 00:16:47.841 { 00:16:47.841 "name": "pt4", 00:16:47.841 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:47.841 "is_configured": true, 00:16:47.841 "data_offset": 2048, 00:16:47.841 "data_size": 63488 00:16:47.841 } 00:16:47.841 ] 00:16:47.841 }' 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.841 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.410 [2024-11-15 09:35:36.725216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.410 "name": "raid_bdev1", 00:16:48.410 "aliases": [ 00:16:48.410 "4f8b74fd-27b0-426a-94c6-f8fc79ecd122" 00:16:48.410 ], 00:16:48.410 "product_name": "Raid Volume", 00:16:48.410 "block_size": 512, 00:16:48.410 "num_blocks": 190464, 00:16:48.410 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:48.410 "assigned_rate_limits": { 00:16:48.410 "rw_ios_per_sec": 0, 00:16:48.410 "rw_mbytes_per_sec": 0, 00:16:48.410 "r_mbytes_per_sec": 0, 00:16:48.410 "w_mbytes_per_sec": 0 00:16:48.410 }, 00:16:48.410 "claimed": false, 00:16:48.410 "zoned": false, 00:16:48.410 "supported_io_types": { 00:16:48.410 "read": true, 00:16:48.410 "write": true, 00:16:48.410 "unmap": false, 00:16:48.410 "flush": false, 00:16:48.410 "reset": true, 00:16:48.410 "nvme_admin": false, 00:16:48.410 "nvme_io": false, 00:16:48.410 "nvme_io_md": false, 00:16:48.410 "write_zeroes": true, 00:16:48.410 "zcopy": false, 00:16:48.410 "get_zone_info": false, 00:16:48.410 "zone_management": false, 00:16:48.410 "zone_append": false, 00:16:48.410 "compare": false, 00:16:48.410 "compare_and_write": false, 00:16:48.410 "abort": false, 00:16:48.410 "seek_hole": false, 00:16:48.410 "seek_data": false, 00:16:48.410 "copy": false, 00:16:48.410 "nvme_iov_md": false 00:16:48.410 }, 00:16:48.410 "driver_specific": { 00:16:48.410 "raid": { 00:16:48.410 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:48.410 "strip_size_kb": 64, 00:16:48.410 "state": "online", 00:16:48.410 "raid_level": "raid5f", 00:16:48.410 "superblock": true, 00:16:48.410 "num_base_bdevs": 4, 00:16:48.410 "num_base_bdevs_discovered": 4, 00:16:48.410 "num_base_bdevs_operational": 4, 00:16:48.410 "base_bdevs_list": [ 00:16:48.410 { 00:16:48.410 "name": "pt1", 00:16:48.410 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.410 "is_configured": true, 00:16:48.410 "data_offset": 2048, 
00:16:48.410 "data_size": 63488 00:16:48.410 }, 00:16:48.410 { 00:16:48.410 "name": "pt2", 00:16:48.410 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.410 "is_configured": true, 00:16:48.410 "data_offset": 2048, 00:16:48.410 "data_size": 63488 00:16:48.410 }, 00:16:48.410 { 00:16:48.410 "name": "pt3", 00:16:48.410 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.410 "is_configured": true, 00:16:48.410 "data_offset": 2048, 00:16:48.410 "data_size": 63488 00:16:48.410 }, 00:16:48.410 { 00:16:48.410 "name": "pt4", 00:16:48.410 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.410 "is_configured": true, 00:16:48.410 "data_offset": 2048, 00:16:48.410 "data_size": 63488 00:16:48.410 } 00:16:48.410 ] 00:16:48.410 } 00:16:48.410 } 00:16:48.410 }' 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:48.410 pt2 00:16:48.410 pt3 00:16:48.410 pt4' 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.410 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 09:35:36 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 09:35:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 [2024-11-15 09:35:37.068614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4f8b74fd-27b0-426a-94c6-f8fc79ecd122 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
4f8b74fd-27b0-426a-94c6-f8fc79ecd122 ']' 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 [2024-11-15 09:35:37.112333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.669 [2024-11-15 09:35:37.112424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.669 [2024-11-15 09:35:37.112563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.669 [2024-11-15 09:35:37.112692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.669 [2024-11-15 09:35:37.112746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.669 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.928 
09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.928 09:35:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.928 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.929 [2024-11-15 09:35:37.272079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:48.929 [2024-11-15 09:35:37.274419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:48.929 [2024-11-15 09:35:37.274489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:48.929 [2024-11-15 09:35:37.274528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:48.929 [2024-11-15 09:35:37.274595] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:48.929 [2024-11-15 09:35:37.274661] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:48.929 [2024-11-15 09:35:37.274683] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:48.929 [2024-11-15 09:35:37.274704] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:48.929 [2024-11-15 09:35:37.274719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.929 [2024-11-15 09:35:37.274732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:48.929 request: 00:16:48.929 { 00:16:48.929 "name": "raid_bdev1", 00:16:48.929 "raid_level": "raid5f", 00:16:48.929 "base_bdevs": [ 00:16:48.929 "malloc1", 00:16:48.929 "malloc2", 00:16:48.929 "malloc3", 00:16:48.929 "malloc4" 00:16:48.929 ], 00:16:48.929 "strip_size_kb": 64, 00:16:48.929 "superblock": false, 00:16:48.929 "method": "bdev_raid_create", 00:16:48.929 "req_id": 1 00:16:48.929 } 00:16:48.929 Got JSON-RPC error response 
00:16:48.929 response: 00:16:48.929 { 00:16:48.929 "code": -17, 00:16:48.929 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:48.929 } 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.929 [2024-11-15 09:35:37.328021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.929 [2024-11-15 09:35:37.328184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:48.929 [2024-11-15 09:35:37.328231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:48.929 [2024-11-15 09:35:37.328266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.929 [2024-11-15 09:35:37.330986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.929 [2024-11-15 09:35:37.331073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.929 [2024-11-15 09:35:37.331225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:48.929 [2024-11-15 09:35:37.331332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.929 pt1 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.929 "name": "raid_bdev1", 00:16:48.929 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:48.929 "strip_size_kb": 64, 00:16:48.929 "state": "configuring", 00:16:48.929 "raid_level": "raid5f", 00:16:48.929 "superblock": true, 00:16:48.929 "num_base_bdevs": 4, 00:16:48.929 "num_base_bdevs_discovered": 1, 00:16:48.929 "num_base_bdevs_operational": 4, 00:16:48.929 "base_bdevs_list": [ 00:16:48.929 { 00:16:48.929 "name": "pt1", 00:16:48.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.929 "is_configured": true, 00:16:48.929 "data_offset": 2048, 00:16:48.929 "data_size": 63488 00:16:48.929 }, 00:16:48.929 { 00:16:48.929 "name": null, 00:16:48.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.929 "is_configured": false, 00:16:48.929 "data_offset": 2048, 00:16:48.929 "data_size": 63488 00:16:48.929 }, 00:16:48.929 { 00:16:48.929 "name": null, 00:16:48.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.929 "is_configured": false, 00:16:48.929 "data_offset": 2048, 00:16:48.929 "data_size": 63488 00:16:48.929 }, 00:16:48.929 { 00:16:48.929 "name": null, 00:16:48.929 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:48.929 "is_configured": false, 00:16:48.929 "data_offset": 2048, 00:16:48.929 "data_size": 63488 00:16:48.929 } 00:16:48.929 ] 00:16:48.929 }' 
00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.929 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.497 [2024-11-15 09:35:37.739308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.497 [2024-11-15 09:35:37.739509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.497 [2024-11-15 09:35:37.739559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:49.497 [2024-11-15 09:35:37.739594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.497 [2024-11-15 09:35:37.740179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.497 [2024-11-15 09:35:37.740244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.497 [2024-11-15 09:35:37.740372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.497 [2024-11-15 09:35:37.740429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.497 pt2 00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:49.497 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.498 [2024-11-15 09:35:37.747258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.498 "name": "raid_bdev1", 00:16:49.498 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:49.498 "strip_size_kb": 64, 00:16:49.498 "state": "configuring", 00:16:49.498 "raid_level": "raid5f", 00:16:49.498 "superblock": true, 00:16:49.498 "num_base_bdevs": 4, 00:16:49.498 "num_base_bdevs_discovered": 1, 00:16:49.498 "num_base_bdevs_operational": 4, 00:16:49.498 "base_bdevs_list": [ 00:16:49.498 { 00:16:49.498 "name": "pt1", 00:16:49.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.498 "is_configured": true, 00:16:49.498 "data_offset": 2048, 00:16:49.498 "data_size": 63488 00:16:49.498 }, 00:16:49.498 { 00:16:49.498 "name": null, 00:16:49.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.498 "is_configured": false, 00:16:49.498 "data_offset": 0, 00:16:49.498 "data_size": 63488 00:16:49.498 }, 00:16:49.498 { 00:16:49.498 "name": null, 00:16:49.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.498 "is_configured": false, 00:16:49.498 "data_offset": 2048, 00:16:49.498 "data_size": 63488 00:16:49.498 }, 00:16:49.498 { 00:16:49.498 "name": null, 00:16:49.498 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:49.498 "is_configured": false, 00:16:49.498 "data_offset": 2048, 00:16:49.498 "data_size": 63488 00:16:49.498 } 00:16:49.498 ] 00:16:49.498 }' 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.498 09:35:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
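The loop entered above (`(( i = 1 )); (( i < num_base_bdevs ))` at `bdev_raid.sh@478`) registers one passthru bdev per remaining base device, wrapping `malloc2`..`malloc4` as `pt2`..`pt4` with fixed UUIDs. A hedged sketch of that loop shape, with a local stub in place of the real `rpc_cmd` (the stub and the `created` array are assumptions for illustration; the real command issues a JSON-RPC call to a running SPDK target):

```shell
#!/usr/bin/env bash
num_base_bdevs=4

# Stub standing in for SPDK's rpc.py wrapper; the real rpc_cmd talks to a
# live target. Here it just records the arguments it would have sent.
created=()
rpc_cmd() { created+=("$*"); }

# Mirror the test's loop: pt1 was claimed earlier, so start at i=1 and
# wrap the remaining malloc bdevs as passthru bdevs with fixed UUIDs.
for (( i = 1; i < num_base_bdevs; i++ )); do
  n=$((i + 1))
  rpc_cmd bdev_passthru_create -b "malloc$n" -p "pt$n" \
    -u "00000000-0000-0000-0000-00000000000$n"
done
echo "${#created[@]} passthru bdevs created"
```

Each registration triggers the `vbdev_passthru_register` notices seen in the log (match on the base bdev, claim, pt_bdev created), after which raid examine finds the superblock on the new pt bdev and claims it.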
00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.757 [2024-11-15 09:35:38.214570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.757 [2024-11-15 09:35:38.214726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.757 [2024-11-15 09:35:38.214768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:49.757 [2024-11-15 09:35:38.214799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.757 [2024-11-15 09:35:38.215384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.757 [2024-11-15 09:35:38.215444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.757 [2024-11-15 09:35:38.215580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.757 [2024-11-15 09:35:38.215633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.757 pt2 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.757 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.018 [2024-11-15 09:35:38.226491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:50.018 [2024-11-15 09:35:38.226590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.018 [2024-11-15 09:35:38.226626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:50.018 [2024-11-15 09:35:38.226652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.018 [2024-11-15 09:35:38.227088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.018 [2024-11-15 09:35:38.227144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.018 [2024-11-15 09:35:38.227245] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:50.018 [2024-11-15 09:35:38.227300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.018 pt3 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.018 [2024-11-15 09:35:38.238443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:50.018 [2024-11-15 09:35:38.238497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.018 [2024-11-15 09:35:38.238518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:50.018 [2024-11-15 09:35:38.238526] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.018 [2024-11-15 09:35:38.238957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.018 [2024-11-15 09:35:38.238976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:50.018 [2024-11-15 09:35:38.239042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:50.018 [2024-11-15 09:35:38.239060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:50.018 [2024-11-15 09:35:38.239206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:50.018 [2024-11-15 09:35:38.239215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.018 [2024-11-15 09:35:38.239464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.018 [2024-11-15 09:35:38.246537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:50.018 [2024-11-15 09:35:38.246561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:50.018 [2024-11-15 09:35:38.246771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.018 pt4 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.018 "name": "raid_bdev1", 00:16:50.018 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:50.018 "strip_size_kb": 64, 00:16:50.018 "state": "online", 00:16:50.018 "raid_level": "raid5f", 00:16:50.018 "superblock": true, 00:16:50.018 "num_base_bdevs": 4, 00:16:50.018 "num_base_bdevs_discovered": 4, 00:16:50.018 "num_base_bdevs_operational": 4, 00:16:50.018 "base_bdevs_list": [ 00:16:50.018 { 00:16:50.018 "name": "pt1", 00:16:50.018 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.018 "is_configured": true, 00:16:50.018 
"data_offset": 2048, 00:16:50.018 "data_size": 63488 00:16:50.018 }, 00:16:50.018 { 00:16:50.018 "name": "pt2", 00:16:50.018 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.018 "is_configured": true, 00:16:50.018 "data_offset": 2048, 00:16:50.018 "data_size": 63488 00:16:50.018 }, 00:16:50.018 { 00:16:50.018 "name": "pt3", 00:16:50.018 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.018 "is_configured": true, 00:16:50.018 "data_offset": 2048, 00:16:50.018 "data_size": 63488 00:16:50.018 }, 00:16:50.018 { 00:16:50.018 "name": "pt4", 00:16:50.018 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.018 "is_configured": true, 00:16:50.018 "data_offset": 2048, 00:16:50.018 "data_size": 63488 00:16:50.018 } 00:16:50.018 ] 00:16:50.018 }' 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.018 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.278 09:35:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:50.278 [2024-11-15 09:35:38.716025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.278 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.543 "name": "raid_bdev1", 00:16:50.543 "aliases": [ 00:16:50.543 "4f8b74fd-27b0-426a-94c6-f8fc79ecd122" 00:16:50.543 ], 00:16:50.543 "product_name": "Raid Volume", 00:16:50.543 "block_size": 512, 00:16:50.543 "num_blocks": 190464, 00:16:50.543 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:50.543 "assigned_rate_limits": { 00:16:50.543 "rw_ios_per_sec": 0, 00:16:50.543 "rw_mbytes_per_sec": 0, 00:16:50.543 "r_mbytes_per_sec": 0, 00:16:50.543 "w_mbytes_per_sec": 0 00:16:50.543 }, 00:16:50.543 "claimed": false, 00:16:50.543 "zoned": false, 00:16:50.543 "supported_io_types": { 00:16:50.543 "read": true, 00:16:50.543 "write": true, 00:16:50.543 "unmap": false, 00:16:50.543 "flush": false, 00:16:50.543 "reset": true, 00:16:50.543 "nvme_admin": false, 00:16:50.543 "nvme_io": false, 00:16:50.543 "nvme_io_md": false, 00:16:50.543 "write_zeroes": true, 00:16:50.543 "zcopy": false, 00:16:50.543 "get_zone_info": false, 00:16:50.543 "zone_management": false, 00:16:50.543 "zone_append": false, 00:16:50.543 "compare": false, 00:16:50.543 "compare_and_write": false, 00:16:50.543 "abort": false, 00:16:50.543 "seek_hole": false, 00:16:50.543 "seek_data": false, 00:16:50.543 "copy": false, 00:16:50.543 "nvme_iov_md": false 00:16:50.543 }, 00:16:50.543 "driver_specific": { 00:16:50.543 "raid": { 00:16:50.543 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:50.543 "strip_size_kb": 64, 00:16:50.543 "state": "online", 00:16:50.543 "raid_level": "raid5f", 00:16:50.543 "superblock": true, 00:16:50.543 "num_base_bdevs": 4, 00:16:50.543 "num_base_bdevs_discovered": 4, 
00:16:50.543 "num_base_bdevs_operational": 4, 00:16:50.543 "base_bdevs_list": [ 00:16:50.543 { 00:16:50.543 "name": "pt1", 00:16:50.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.543 "is_configured": true, 00:16:50.543 "data_offset": 2048, 00:16:50.543 "data_size": 63488 00:16:50.543 }, 00:16:50.543 { 00:16:50.543 "name": "pt2", 00:16:50.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.543 "is_configured": true, 00:16:50.543 "data_offset": 2048, 00:16:50.543 "data_size": 63488 00:16:50.543 }, 00:16:50.543 { 00:16:50.543 "name": "pt3", 00:16:50.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.543 "is_configured": true, 00:16:50.543 "data_offset": 2048, 00:16:50.543 "data_size": 63488 00:16:50.543 }, 00:16:50.543 { 00:16:50.543 "name": "pt4", 00:16:50.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.543 "is_configured": true, 00:16:50.543 "data_offset": 2048, 00:16:50.543 "data_size": 63488 00:16:50.543 } 00:16:50.543 ] 00:16:50.543 } 00:16:50.543 } 00:16:50.543 }' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:50.543 pt2 00:16:50.543 pt3 00:16:50.543 pt4' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.543 09:35:38 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.543 09:35:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:50.812 [2024-11-15 09:35:39.019436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.812 
09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4f8b74fd-27b0-426a-94c6-f8fc79ecd122 '!=' 4f8b74fd-27b0-426a-94c6-f8fc79ecd122 ']' 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.812 [2024-11-15 09:35:39.071199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.812 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.812 "name": "raid_bdev1", 00:16:50.812 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:50.812 "strip_size_kb": 64, 00:16:50.812 "state": "online", 00:16:50.812 "raid_level": "raid5f", 00:16:50.812 "superblock": true, 00:16:50.812 "num_base_bdevs": 4, 00:16:50.812 "num_base_bdevs_discovered": 3, 00:16:50.812 "num_base_bdevs_operational": 3, 00:16:50.812 "base_bdevs_list": [ 00:16:50.812 { 00:16:50.812 "name": null, 00:16:50.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.812 "is_configured": false, 00:16:50.812 "data_offset": 0, 00:16:50.812 "data_size": 63488 00:16:50.812 }, 00:16:50.812 { 00:16:50.812 "name": "pt2", 00:16:50.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.812 "is_configured": true, 00:16:50.812 "data_offset": 2048, 00:16:50.812 "data_size": 63488 00:16:50.812 }, 00:16:50.812 { 00:16:50.812 "name": "pt3", 00:16:50.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.812 "is_configured": true, 00:16:50.813 "data_offset": 2048, 00:16:50.813 "data_size": 63488 00:16:50.813 }, 00:16:50.813 { 00:16:50.813 "name": "pt4", 00:16:50.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.813 "is_configured": true, 00:16:50.813 
"data_offset": 2048, 00:16:50.813 "data_size": 63488 00:16:50.813 } 00:16:50.813 ] 00:16:50.813 }' 00:16:50.813 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.813 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 [2024-11-15 09:35:39.554309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.382 [2024-11-15 09:35:39.554406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.382 [2024-11-15 09:35:39.554530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.382 [2024-11-15 09:35:39.554641] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.382 [2024-11-15 09:35:39.554702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 [2024-11-15 09:35:39.654095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.382 [2024-11-15 09:35:39.654227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.382 [2024-11-15 09:35:39.654253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:51.382 [2024-11-15 09:35:39.654263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.382 [2024-11-15 09:35:39.657077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.382 [2024-11-15 09:35:39.657129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.382 [2024-11-15 09:35:39.657219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:51.382 [2024-11-15 09:35:39.657277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.382 pt2 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.382 "name": "raid_bdev1", 00:16:51.382 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:51.382 "strip_size_kb": 64, 00:16:51.382 "state": "configuring", 00:16:51.382 "raid_level": "raid5f", 00:16:51.382 "superblock": true, 00:16:51.382 
"num_base_bdevs": 4, 00:16:51.382 "num_base_bdevs_discovered": 1, 00:16:51.382 "num_base_bdevs_operational": 3, 00:16:51.382 "base_bdevs_list": [ 00:16:51.382 { 00:16:51.382 "name": null, 00:16:51.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.382 "is_configured": false, 00:16:51.382 "data_offset": 2048, 00:16:51.382 "data_size": 63488 00:16:51.382 }, 00:16:51.382 { 00:16:51.382 "name": "pt2", 00:16:51.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.382 "is_configured": true, 00:16:51.382 "data_offset": 2048, 00:16:51.382 "data_size": 63488 00:16:51.382 }, 00:16:51.382 { 00:16:51.382 "name": null, 00:16:51.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.382 "is_configured": false, 00:16:51.382 "data_offset": 2048, 00:16:51.382 "data_size": 63488 00:16:51.382 }, 00:16:51.382 { 00:16:51.382 "name": null, 00:16:51.382 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.382 "is_configured": false, 00:16:51.382 "data_offset": 2048, 00:16:51.382 "data_size": 63488 00:16:51.382 } 00:16:51.382 ] 00:16:51.382 }' 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.382 09:35:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.647 [2024-11-15 09:35:40.105408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:51.647 [2024-11-15 
09:35:40.105541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.647 [2024-11-15 09:35:40.105597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:51.647 [2024-11-15 09:35:40.105633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.647 [2024-11-15 09:35:40.106275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.647 [2024-11-15 09:35:40.106346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:51.647 [2024-11-15 09:35:40.106492] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:51.647 [2024-11-15 09:35:40.106561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:51.647 pt3 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:51.647 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.906 "name": "raid_bdev1", 00:16:51.906 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:51.906 "strip_size_kb": 64, 00:16:51.906 "state": "configuring", 00:16:51.906 "raid_level": "raid5f", 00:16:51.906 "superblock": true, 00:16:51.906 "num_base_bdevs": 4, 00:16:51.906 "num_base_bdevs_discovered": 2, 00:16:51.906 "num_base_bdevs_operational": 3, 00:16:51.906 "base_bdevs_list": [ 00:16:51.906 { 00:16:51.906 "name": null, 00:16:51.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.906 "is_configured": false, 00:16:51.906 "data_offset": 2048, 00:16:51.906 "data_size": 63488 00:16:51.906 }, 00:16:51.906 { 00:16:51.906 "name": "pt2", 00:16:51.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.906 "is_configured": true, 00:16:51.906 "data_offset": 2048, 00:16:51.906 "data_size": 63488 00:16:51.906 }, 00:16:51.906 { 00:16:51.906 "name": "pt3", 00:16:51.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.906 "is_configured": true, 00:16:51.906 "data_offset": 2048, 00:16:51.906 "data_size": 63488 00:16:51.906 }, 00:16:51.906 { 00:16:51.906 "name": null, 00:16:51.906 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.906 "is_configured": false, 00:16:51.906 "data_offset": 2048, 
00:16:51.906 "data_size": 63488 00:16:51.906 } 00:16:51.906 ] 00:16:51.906 }' 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.906 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.167 [2024-11-15 09:35:40.568709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:52.167 [2024-11-15 09:35:40.568862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.167 [2024-11-15 09:35:40.568913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:52.167 [2024-11-15 09:35:40.568952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.167 [2024-11-15 09:35:40.569568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.167 [2024-11-15 09:35:40.569639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:52.167 [2024-11-15 09:35:40.569783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:52.167 [2024-11-15 09:35:40.569816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:52.167 [2024-11-15 09:35:40.570004] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:52.167 [2024-11-15 09:35:40.570016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:52.167 [2024-11-15 09:35:40.570342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:52.167 [2024-11-15 09:35:40.579060] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:52.167 [2024-11-15 09:35:40.579096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:52.167 [2024-11-15 09:35:40.579508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.167 pt4 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.167 
09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.167 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.426 09:35:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.426 "name": "raid_bdev1", 00:16:52.426 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:52.426 "strip_size_kb": 64, 00:16:52.426 "state": "online", 00:16:52.426 "raid_level": "raid5f", 00:16:52.426 "superblock": true, 00:16:52.426 "num_base_bdevs": 4, 00:16:52.426 "num_base_bdevs_discovered": 3, 00:16:52.426 "num_base_bdevs_operational": 3, 00:16:52.426 "base_bdevs_list": [ 00:16:52.426 { 00:16:52.426 "name": null, 00:16:52.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.426 "is_configured": false, 00:16:52.426 "data_offset": 2048, 00:16:52.426 "data_size": 63488 00:16:52.426 }, 00:16:52.426 { 00:16:52.426 "name": "pt2", 00:16:52.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.426 "is_configured": true, 00:16:52.426 "data_offset": 2048, 00:16:52.426 "data_size": 63488 00:16:52.426 }, 00:16:52.426 { 00:16:52.426 "name": "pt3", 00:16:52.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.426 "is_configured": true, 00:16:52.426 "data_offset": 2048, 00:16:52.426 "data_size": 63488 00:16:52.426 }, 00:16:52.426 { 00:16:52.426 "name": "pt4", 00:16:52.426 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.426 "is_configured": true, 00:16:52.426 "data_offset": 2048, 00:16:52.426 "data_size": 63488 00:16:52.426 } 00:16:52.426 ] 00:16:52.426 }' 00:16:52.426 09:35:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.426 09:35:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.687 [2024-11-15 09:35:41.054809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.687 [2024-11-15 09:35:41.054929] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.687 [2024-11-15 09:35:41.055073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.687 [2024-11-15 09:35:41.055199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.687 [2024-11-15 09:35:41.055298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.687 [2024-11-15 09:35:41.126639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:52.687 [2024-11-15 09:35:41.126724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.687 [2024-11-15 09:35:41.126756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:52.687 [2024-11-15 09:35:41.126770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.687 [2024-11-15 09:35:41.129785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.687 [2024-11-15 09:35:41.129875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:52.687 [2024-11-15 09:35:41.129997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:52.687 [2024-11-15 09:35:41.130076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:52.687 pt1 
00:16:52.687 [2024-11-15 09:35:41.130311] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:52.687 [2024-11-15 09:35:41.130328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.687 [2024-11-15 09:35:41.130345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:52.687 [2024-11-15 09:35:41.130410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:52.687 [2024-11-15 09:35:41.130528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.687 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.947 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.947 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.947 "name": "raid_bdev1", 00:16:52.947 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:52.947 "strip_size_kb": 64, 00:16:52.947 "state": "configuring", 00:16:52.947 "raid_level": "raid5f", 00:16:52.947 "superblock": true, 00:16:52.947 "num_base_bdevs": 4, 00:16:52.947 "num_base_bdevs_discovered": 2, 00:16:52.947 "num_base_bdevs_operational": 3, 00:16:52.947 "base_bdevs_list": [ 00:16:52.947 { 00:16:52.947 "name": null, 00:16:52.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.947 "is_configured": false, 00:16:52.947 "data_offset": 2048, 00:16:52.947 "data_size": 63488 00:16:52.947 }, 00:16:52.947 { 00:16:52.947 "name": "pt2", 00:16:52.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.947 "is_configured": true, 00:16:52.947 "data_offset": 2048, 00:16:52.947 "data_size": 63488 00:16:52.947 }, 00:16:52.947 { 00:16:52.947 "name": "pt3", 00:16:52.947 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.947 "is_configured": true, 00:16:52.947 "data_offset": 2048, 00:16:52.947 "data_size": 63488 00:16:52.947 }, 00:16:52.947 { 00:16:52.947 "name": null, 00:16:52.947 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.947 "is_configured": false, 00:16:52.947 "data_offset": 2048, 00:16:52.947 "data_size": 63488 00:16:52.947 } 00:16:52.947 ] 
00:16:52.947 }' 00:16:52.948 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.948 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.207 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.207 [2024-11-15 09:35:41.641923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:53.208 [2024-11-15 09:35:41.642024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.208 [2024-11-15 09:35:41.642060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:53.208 [2024-11-15 09:35:41.642073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.208 [2024-11-15 09:35:41.642713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.208 [2024-11-15 09:35:41.642735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:53.208 [2024-11-15 09:35:41.642856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:53.208 [2024-11-15 09:35:41.642916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:53.208 [2024-11-15 09:35:41.643109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:53.208 [2024-11-15 09:35:41.643127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:53.208 [2024-11-15 09:35:41.643464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:53.208 [2024-11-15 09:35:41.652764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:53.208 [2024-11-15 09:35:41.652798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:53.208 [2024-11-15 09:35:41.653269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.208 pt4 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.208 09:35:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.208 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.469 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.469 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.469 "name": "raid_bdev1", 00:16:53.469 "uuid": "4f8b74fd-27b0-426a-94c6-f8fc79ecd122", 00:16:53.469 "strip_size_kb": 64, 00:16:53.469 "state": "online", 00:16:53.469 "raid_level": "raid5f", 00:16:53.469 "superblock": true, 00:16:53.469 "num_base_bdevs": 4, 00:16:53.469 "num_base_bdevs_discovered": 3, 00:16:53.469 "num_base_bdevs_operational": 3, 00:16:53.469 "base_bdevs_list": [ 00:16:53.469 { 00:16:53.469 "name": null, 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.469 "is_configured": false, 00:16:53.469 "data_offset": 2048, 00:16:53.469 "data_size": 63488 00:16:53.469 }, 00:16:53.469 { 00:16:53.469 "name": "pt2", 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:53.469 "is_configured": true, 00:16:53.469 "data_offset": 2048, 00:16:53.469 "data_size": 63488 00:16:53.469 }, 00:16:53.469 { 00:16:53.469 "name": "pt3", 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:53.469 "is_configured": true, 00:16:53.469 "data_offset": 2048, 00:16:53.469 "data_size": 63488 
00:16:53.469 }, 00:16:53.469 { 00:16:53.469 "name": "pt4", 00:16:53.469 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:53.469 "is_configured": true, 00:16:53.469 "data_offset": 2048, 00:16:53.469 "data_size": 63488 00:16:53.469 } 00:16:53.469 ] 00:16:53.469 }' 00:16:53.469 09:35:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.469 09:35:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.729 [2024-11-15 09:35:42.144571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4f8b74fd-27b0-426a-94c6-f8fc79ecd122 '!=' 4f8b74fd-27b0-426a-94c6-f8fc79ecd122 ']' 00:16:53.729 09:35:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84542 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84542 ']' 00:16:53.729 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84542 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84542 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84542' 00:16:53.989 killing process with pid 84542 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84542 00:16:53.989 [2024-11-15 09:35:42.243513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.989 [2024-11-15 09:35:42.243668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.989 09:35:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84542 00:16:53.989 [2024-11-15 09:35:42.243780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.989 [2024-11-15 09:35:42.243798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:54.559 [2024-11-15 09:35:42.753959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:55.982 09:35:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:55.982 
00:16:55.982 real 0m9.045s 00:16:55.982 user 0m13.857s 00:16:55.982 sys 0m1.865s 00:16:55.982 09:35:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:55.982 09:35:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.982 ************************************ 00:16:55.982 END TEST raid5f_superblock_test 00:16:55.982 ************************************ 00:16:55.982 09:35:44 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:55.982 09:35:44 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:55.982 09:35:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:55.982 09:35:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:55.982 09:35:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:55.982 ************************************ 00:16:55.982 START TEST raid5f_rebuild_test 00:16:55.982 ************************************ 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:55.982 09:35:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85033 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85033 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 85033 ']' 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:55.982 09:35:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.982 [2024-11-15 09:35:44.200459] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:16:55.982 [2024-11-15 09:35:44.200649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85033 ] 00:16:55.982 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:55.982 Zero copy mechanism will not be used. 00:16:55.982 [2024-11-15 09:35:44.355722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.242 [2024-11-15 09:35:44.493997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.500 [2024-11-15 09:35:44.731293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.500 [2024-11-15 09:35:44.731344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.760 BaseBdev1_malloc 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:56.760 [2024-11-15 09:35:45.098904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:56.760 [2024-11-15 09:35:45.099044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.760 [2024-11-15 09:35:45.099096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:56.760 [2024-11-15 09:35:45.099139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.760 [2024-11-15 09:35:45.101908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.760 [2024-11-15 09:35:45.101985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:56.760 BaseBdev1 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.760 BaseBdev2_malloc 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.760 [2024-11-15 09:35:45.163024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:56.760 [2024-11-15 09:35:45.163107] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.760 [2024-11-15 09:35:45.163132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:56.760 [2024-11-15 09:35:45.163145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.760 [2024-11-15 09:35:45.165741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.760 [2024-11-15 09:35:45.165792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:56.760 BaseBdev2 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.760 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 BaseBdev3_malloc 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 [2024-11-15 09:35:45.232827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:57.020 [2024-11-15 09:35:45.232958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.020 [2024-11-15 09:35:45.233026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:57.020 
[2024-11-15 09:35:45.233073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.020 [2024-11-15 09:35:45.235594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.020 [2024-11-15 09:35:45.235673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:57.020 BaseBdev3 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 BaseBdev4_malloc 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 [2024-11-15 09:35:45.295260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:57.020 [2024-11-15 09:35:45.295396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.020 [2024-11-15 09:35:45.295446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:57.020 [2024-11-15 09:35:45.295488] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.020 [2024-11-15 09:35:45.298234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:57.020 [2024-11-15 09:35:45.298314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:57.020 BaseBdev4 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 spare_malloc 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 spare_delay 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 [2024-11-15 09:35:45.370671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:57.020 [2024-11-15 09:35:45.370781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.020 [2024-11-15 09:35:45.370823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:57.020 [2024-11-15 09:35:45.370862] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.020 [2024-11-15 09:35:45.373321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.020 [2024-11-15 09:35:45.373395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:57.020 spare 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.020 [2024-11-15 09:35:45.382717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.020 [2024-11-15 09:35:45.384953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.020 [2024-11-15 09:35:45.385056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.020 [2024-11-15 09:35:45.385132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:57.020 [2024-11-15 09:35:45.385261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:57.020 [2024-11-15 09:35:45.385302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:57.020 [2024-11-15 09:35:45.385593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:57.020 [2024-11-15 09:35:45.393743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:57.020 [2024-11-15 09:35:45.393798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:57.020 [2024-11-15 
09:35:45.394052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.020 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.021 "name": "raid_bdev1", 00:16:57.021 "uuid": 
"da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:16:57.021 "strip_size_kb": 64, 00:16:57.021 "state": "online", 00:16:57.021 "raid_level": "raid5f", 00:16:57.021 "superblock": false, 00:16:57.021 "num_base_bdevs": 4, 00:16:57.021 "num_base_bdevs_discovered": 4, 00:16:57.021 "num_base_bdevs_operational": 4, 00:16:57.021 "base_bdevs_list": [ 00:16:57.021 { 00:16:57.021 "name": "BaseBdev1", 00:16:57.021 "uuid": "0a3cb90c-5bb1-5b9d-9511-432618476164", 00:16:57.021 "is_configured": true, 00:16:57.021 "data_offset": 0, 00:16:57.021 "data_size": 65536 00:16:57.021 }, 00:16:57.021 { 00:16:57.021 "name": "BaseBdev2", 00:16:57.021 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:16:57.021 "is_configured": true, 00:16:57.021 "data_offset": 0, 00:16:57.021 "data_size": 65536 00:16:57.021 }, 00:16:57.021 { 00:16:57.021 "name": "BaseBdev3", 00:16:57.021 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:16:57.021 "is_configured": true, 00:16:57.021 "data_offset": 0, 00:16:57.021 "data_size": 65536 00:16:57.021 }, 00:16:57.021 { 00:16:57.021 "name": "BaseBdev4", 00:16:57.021 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:16:57.021 "is_configured": true, 00:16:57.021 "data_offset": 0, 00:16:57.021 "data_size": 65536 00:16:57.021 } 00:16:57.021 ] 00:16:57.021 }' 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.021 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.590 [2024-11-15 09:35:45.831374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.590 09:35:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:57.850 [2024-11-15 09:35:46.114736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:57.850 /dev/nbd0 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:57.850 1+0 records in 00:16:57.850 1+0 records out 00:16:57.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663937 s, 6.2 MB/s 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.850 09:35:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:57.850 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:58.419 512+0 records in 00:16:58.419 512+0 records out 00:16:58.419 100663296 bytes (101 MB, 96 MiB) copied, 0.499245 s, 202 MB/s 00:16:58.419 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:58.419 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:58.419 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:58.419 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:58.419 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:58.419 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:58.419 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:58.680 [2024-11-15 09:35:46.939188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.680 [2024-11-15 09:35:46.957507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.680 09:35:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.680 09:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.680 "name": "raid_bdev1", 00:16:58.680 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:16:58.680 "strip_size_kb": 64, 00:16:58.680 "state": "online", 00:16:58.680 "raid_level": "raid5f", 00:16:58.680 "superblock": false, 00:16:58.680 "num_base_bdevs": 4, 00:16:58.680 "num_base_bdevs_discovered": 3, 00:16:58.680 "num_base_bdevs_operational": 3, 00:16:58.680 "base_bdevs_list": [ 00:16:58.680 { 00:16:58.680 "name": null, 00:16:58.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.680 "is_configured": false, 00:16:58.680 "data_offset": 0, 00:16:58.680 "data_size": 65536 00:16:58.680 }, 00:16:58.680 { 00:16:58.680 "name": "BaseBdev2", 00:16:58.680 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:16:58.680 "is_configured": true, 00:16:58.680 
"data_offset": 0, 00:16:58.680 "data_size": 65536 00:16:58.680 }, 00:16:58.680 { 00:16:58.680 "name": "BaseBdev3", 00:16:58.680 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:16:58.680 "is_configured": true, 00:16:58.680 "data_offset": 0, 00:16:58.680 "data_size": 65536 00:16:58.680 }, 00:16:58.680 { 00:16:58.680 "name": "BaseBdev4", 00:16:58.680 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:16:58.680 "is_configured": true, 00:16:58.680 "data_offset": 0, 00:16:58.680 "data_size": 65536 00:16:58.680 } 00:16:58.680 ] 00:16:58.680 }' 00:16:58.680 09:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.680 09:35:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.249 09:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:59.249 09:35:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.249 09:35:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.249 [2024-11-15 09:35:47.460701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:59.249 [2024-11-15 09:35:47.480128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:59.249 09:35:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.249 09:35:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:59.249 [2024-11-15 09:35:47.491049] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.197 "name": "raid_bdev1", 00:17:00.197 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:00.197 "strip_size_kb": 64, 00:17:00.197 "state": "online", 00:17:00.197 "raid_level": "raid5f", 00:17:00.197 "superblock": false, 00:17:00.197 "num_base_bdevs": 4, 00:17:00.197 "num_base_bdevs_discovered": 4, 00:17:00.197 "num_base_bdevs_operational": 4, 00:17:00.197 "process": { 00:17:00.197 "type": "rebuild", 00:17:00.197 "target": "spare", 00:17:00.197 "progress": { 00:17:00.197 "blocks": 19200, 00:17:00.197 "percent": 9 00:17:00.197 } 00:17:00.197 }, 00:17:00.197 "base_bdevs_list": [ 00:17:00.197 { 00:17:00.197 "name": "spare", 00:17:00.197 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:00.197 "is_configured": true, 00:17:00.197 "data_offset": 0, 00:17:00.197 "data_size": 65536 00:17:00.197 }, 00:17:00.197 { 00:17:00.197 "name": "BaseBdev2", 00:17:00.197 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:00.197 "is_configured": true, 00:17:00.197 "data_offset": 0, 00:17:00.197 "data_size": 65536 00:17:00.197 }, 00:17:00.197 { 00:17:00.197 "name": "BaseBdev3", 00:17:00.197 "uuid": 
"cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:00.197 "is_configured": true, 00:17:00.197 "data_offset": 0, 00:17:00.197 "data_size": 65536 00:17:00.197 }, 00:17:00.197 { 00:17:00.197 "name": "BaseBdev4", 00:17:00.197 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:00.197 "is_configured": true, 00:17:00.197 "data_offset": 0, 00:17:00.197 "data_size": 65536 00:17:00.197 } 00:17:00.197 ] 00:17:00.197 }' 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.197 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.198 [2024-11-15 09:35:48.622567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:00.456 [2024-11-15 09:35:48.702176] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:00.456 [2024-11-15 09:35:48.702311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.457 [2024-11-15 09:35:48.702336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:00.457 [2024-11-15 09:35:48.702350] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.457 "name": "raid_bdev1", 00:17:00.457 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:00.457 "strip_size_kb": 64, 00:17:00.457 "state": "online", 00:17:00.457 "raid_level": "raid5f", 00:17:00.457 "superblock": false, 00:17:00.457 "num_base_bdevs": 4, 00:17:00.457 "num_base_bdevs_discovered": 3, 00:17:00.457 
"num_base_bdevs_operational": 3, 00:17:00.457 "base_bdevs_list": [ 00:17:00.457 { 00:17:00.457 "name": null, 00:17:00.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.457 "is_configured": false, 00:17:00.457 "data_offset": 0, 00:17:00.457 "data_size": 65536 00:17:00.457 }, 00:17:00.457 { 00:17:00.457 "name": "BaseBdev2", 00:17:00.457 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:00.457 "is_configured": true, 00:17:00.457 "data_offset": 0, 00:17:00.457 "data_size": 65536 00:17:00.457 }, 00:17:00.457 { 00:17:00.457 "name": "BaseBdev3", 00:17:00.457 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:00.457 "is_configured": true, 00:17:00.457 "data_offset": 0, 00:17:00.457 "data_size": 65536 00:17:00.457 }, 00:17:00.457 { 00:17:00.457 "name": "BaseBdev4", 00:17:00.457 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:00.457 "is_configured": true, 00:17:00.457 "data_offset": 0, 00:17:00.457 "data_size": 65536 00:17:00.457 } 00:17:00.457 ] 00:17:00.457 }' 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.457 09:35:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.026 09:35:49 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.026 "name": "raid_bdev1", 00:17:01.026 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:01.026 "strip_size_kb": 64, 00:17:01.026 "state": "online", 00:17:01.026 "raid_level": "raid5f", 00:17:01.026 "superblock": false, 00:17:01.026 "num_base_bdevs": 4, 00:17:01.026 "num_base_bdevs_discovered": 3, 00:17:01.026 "num_base_bdevs_operational": 3, 00:17:01.026 "base_bdevs_list": [ 00:17:01.026 { 00:17:01.026 "name": null, 00:17:01.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.026 "is_configured": false, 00:17:01.026 "data_offset": 0, 00:17:01.026 "data_size": 65536 00:17:01.026 }, 00:17:01.026 { 00:17:01.026 "name": "BaseBdev2", 00:17:01.026 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:01.026 "is_configured": true, 00:17:01.026 "data_offset": 0, 00:17:01.026 "data_size": 65536 00:17:01.026 }, 00:17:01.026 { 00:17:01.026 "name": "BaseBdev3", 00:17:01.026 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:01.026 "is_configured": true, 00:17:01.026 "data_offset": 0, 00:17:01.026 "data_size": 65536 00:17:01.026 }, 00:17:01.026 { 00:17:01.026 "name": "BaseBdev4", 00:17:01.026 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:01.026 "is_configured": true, 00:17:01.026 "data_offset": 0, 00:17:01.026 "data_size": 65536 00:17:01.026 } 00:17:01.026 ] 00:17:01.026 }' 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.026 [2024-11-15 09:35:49.366680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.026 [2024-11-15 09:35:49.384492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.026 09:35:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:01.026 [2024-11-15 09:35:49.395226] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.965 09:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.224 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.224 "name": "raid_bdev1", 00:17:02.224 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:02.224 "strip_size_kb": 64, 00:17:02.224 "state": "online", 00:17:02.224 "raid_level": "raid5f", 00:17:02.224 "superblock": false, 00:17:02.224 "num_base_bdevs": 4, 00:17:02.224 "num_base_bdevs_discovered": 4, 00:17:02.224 "num_base_bdevs_operational": 4, 00:17:02.224 "process": { 00:17:02.224 "type": "rebuild", 00:17:02.224 "target": "spare", 00:17:02.224 "progress": { 00:17:02.224 "blocks": 17280, 00:17:02.224 "percent": 8 00:17:02.224 } 00:17:02.224 }, 00:17:02.224 "base_bdevs_list": [ 00:17:02.224 { 00:17:02.224 "name": "spare", 00:17:02.224 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:02.224 "is_configured": true, 00:17:02.224 "data_offset": 0, 00:17:02.224 "data_size": 65536 00:17:02.224 }, 00:17:02.224 { 00:17:02.224 "name": "BaseBdev2", 00:17:02.224 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:02.224 "is_configured": true, 00:17:02.224 "data_offset": 0, 00:17:02.224 "data_size": 65536 00:17:02.224 }, 00:17:02.224 { 00:17:02.224 "name": "BaseBdev3", 00:17:02.224 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:02.224 "is_configured": true, 00:17:02.224 "data_offset": 0, 00:17:02.224 "data_size": 65536 00:17:02.224 }, 00:17:02.224 { 00:17:02.224 "name": "BaseBdev4", 00:17:02.224 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:02.224 "is_configured": true, 00:17:02.224 "data_offset": 0, 00:17:02.224 "data_size": 65536 00:17:02.224 } 00:17:02.224 ] 00:17:02.224 }' 00:17:02.224 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.224 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=644 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.225 
"name": "raid_bdev1", 00:17:02.225 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:02.225 "strip_size_kb": 64, 00:17:02.225 "state": "online", 00:17:02.225 "raid_level": "raid5f", 00:17:02.225 "superblock": false, 00:17:02.225 "num_base_bdevs": 4, 00:17:02.225 "num_base_bdevs_discovered": 4, 00:17:02.225 "num_base_bdevs_operational": 4, 00:17:02.225 "process": { 00:17:02.225 "type": "rebuild", 00:17:02.225 "target": "spare", 00:17:02.225 "progress": { 00:17:02.225 "blocks": 21120, 00:17:02.225 "percent": 10 00:17:02.225 } 00:17:02.225 }, 00:17:02.225 "base_bdevs_list": [ 00:17:02.225 { 00:17:02.225 "name": "spare", 00:17:02.225 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:02.225 "is_configured": true, 00:17:02.225 "data_offset": 0, 00:17:02.225 "data_size": 65536 00:17:02.225 }, 00:17:02.225 { 00:17:02.225 "name": "BaseBdev2", 00:17:02.225 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:02.225 "is_configured": true, 00:17:02.225 "data_offset": 0, 00:17:02.225 "data_size": 65536 00:17:02.225 }, 00:17:02.225 { 00:17:02.225 "name": "BaseBdev3", 00:17:02.225 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:02.225 "is_configured": true, 00:17:02.225 "data_offset": 0, 00:17:02.225 "data_size": 65536 00:17:02.225 }, 00:17:02.225 { 00:17:02.225 "name": "BaseBdev4", 00:17:02.225 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:02.225 "is_configured": true, 00:17:02.225 "data_offset": 0, 00:17:02.225 "data_size": 65536 00:17:02.225 } 00:17:02.225 ] 00:17:02.225 }' 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.225 09:35:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.225 09:35:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.604 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.604 "name": "raid_bdev1", 00:17:03.604 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:03.604 "strip_size_kb": 64, 00:17:03.604 "state": "online", 00:17:03.604 "raid_level": "raid5f", 00:17:03.604 "superblock": false, 00:17:03.604 "num_base_bdevs": 4, 00:17:03.604 "num_base_bdevs_discovered": 4, 00:17:03.604 "num_base_bdevs_operational": 4, 00:17:03.604 "process": { 00:17:03.604 "type": "rebuild", 00:17:03.604 "target": "spare", 00:17:03.604 "progress": { 00:17:03.604 "blocks": 42240, 00:17:03.604 "percent": 21 00:17:03.604 } 00:17:03.604 }, 00:17:03.604 "base_bdevs_list": [ 00:17:03.604 { 
00:17:03.604 "name": "spare", 00:17:03.604 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:03.604 "is_configured": true, 00:17:03.604 "data_offset": 0, 00:17:03.604 "data_size": 65536 00:17:03.604 }, 00:17:03.604 { 00:17:03.604 "name": "BaseBdev2", 00:17:03.604 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:03.604 "is_configured": true, 00:17:03.604 "data_offset": 0, 00:17:03.604 "data_size": 65536 00:17:03.604 }, 00:17:03.604 { 00:17:03.604 "name": "BaseBdev3", 00:17:03.604 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:03.605 "is_configured": true, 00:17:03.605 "data_offset": 0, 00:17:03.605 "data_size": 65536 00:17:03.605 }, 00:17:03.605 { 00:17:03.605 "name": "BaseBdev4", 00:17:03.605 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:03.605 "is_configured": true, 00:17:03.605 "data_offset": 0, 00:17:03.605 "data_size": 65536 00:17:03.605 } 00:17:03.605 ] 00:17:03.605 }' 00:17:03.605 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.605 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.605 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.605 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.605 09:35:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.543 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.544 09:35:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.544 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.544 "name": "raid_bdev1", 00:17:04.544 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:04.544 "strip_size_kb": 64, 00:17:04.544 "state": "online", 00:17:04.544 "raid_level": "raid5f", 00:17:04.544 "superblock": false, 00:17:04.544 "num_base_bdevs": 4, 00:17:04.544 "num_base_bdevs_discovered": 4, 00:17:04.544 "num_base_bdevs_operational": 4, 00:17:04.544 "process": { 00:17:04.544 "type": "rebuild", 00:17:04.544 "target": "spare", 00:17:04.544 "progress": { 00:17:04.544 "blocks": 65280, 00:17:04.544 "percent": 33 00:17:04.544 } 00:17:04.544 }, 00:17:04.544 "base_bdevs_list": [ 00:17:04.544 { 00:17:04.544 "name": "spare", 00:17:04.544 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:04.544 "is_configured": true, 00:17:04.544 "data_offset": 0, 00:17:04.544 "data_size": 65536 00:17:04.544 }, 00:17:04.544 { 00:17:04.544 "name": "BaseBdev2", 00:17:04.544 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:04.544 "is_configured": true, 00:17:04.544 "data_offset": 0, 00:17:04.544 "data_size": 65536 00:17:04.544 }, 00:17:04.544 { 00:17:04.544 "name": "BaseBdev3", 00:17:04.544 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:04.544 "is_configured": true, 00:17:04.544 "data_offset": 0, 00:17:04.544 
"data_size": 65536 00:17:04.544 }, 00:17:04.544 { 00:17:04.544 "name": "BaseBdev4", 00:17:04.544 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:04.544 "is_configured": true, 00:17:04.544 "data_offset": 0, 00:17:04.544 "data_size": 65536 00:17:04.544 } 00:17:04.544 ] 00:17:04.544 }' 00:17:04.544 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.544 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.544 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.544 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.544 09:35:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.923 "name": "raid_bdev1", 00:17:05.923 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:05.923 "strip_size_kb": 64, 00:17:05.923 "state": "online", 00:17:05.923 "raid_level": "raid5f", 00:17:05.923 "superblock": false, 00:17:05.923 "num_base_bdevs": 4, 00:17:05.923 "num_base_bdevs_discovered": 4, 00:17:05.923 "num_base_bdevs_operational": 4, 00:17:05.923 "process": { 00:17:05.923 "type": "rebuild", 00:17:05.923 "target": "spare", 00:17:05.923 "progress": { 00:17:05.923 "blocks": 86400, 00:17:05.923 "percent": 43 00:17:05.923 } 00:17:05.923 }, 00:17:05.923 "base_bdevs_list": [ 00:17:05.923 { 00:17:05.923 "name": "spare", 00:17:05.923 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 0, 00:17:05.923 "data_size": 65536 00:17:05.923 }, 00:17:05.923 { 00:17:05.923 "name": "BaseBdev2", 00:17:05.923 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 0, 00:17:05.923 "data_size": 65536 00:17:05.923 }, 00:17:05.923 { 00:17:05.923 "name": "BaseBdev3", 00:17:05.923 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 0, 00:17:05.923 "data_size": 65536 00:17:05.923 }, 00:17:05.923 { 00:17:05.923 "name": "BaseBdev4", 00:17:05.923 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:05.923 "is_configured": true, 00:17:05.923 "data_offset": 0, 00:17:05.923 "data_size": 65536 00:17:05.923 } 00:17:05.923 ] 00:17:05.923 }' 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.923 09:35:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.860 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.860 "name": "raid_bdev1", 00:17:06.860 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:06.860 "strip_size_kb": 64, 00:17:06.860 "state": "online", 00:17:06.860 "raid_level": "raid5f", 00:17:06.860 "superblock": false, 00:17:06.860 "num_base_bdevs": 4, 00:17:06.860 "num_base_bdevs_discovered": 4, 00:17:06.860 "num_base_bdevs_operational": 4, 00:17:06.860 "process": { 00:17:06.860 "type": "rebuild", 00:17:06.860 "target": "spare", 00:17:06.860 
"progress": { 00:17:06.860 "blocks": 109440, 00:17:06.860 "percent": 55 00:17:06.860 } 00:17:06.860 }, 00:17:06.860 "base_bdevs_list": [ 00:17:06.860 { 00:17:06.860 "name": "spare", 00:17:06.860 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:06.860 "is_configured": true, 00:17:06.860 "data_offset": 0, 00:17:06.860 "data_size": 65536 00:17:06.860 }, 00:17:06.860 { 00:17:06.860 "name": "BaseBdev2", 00:17:06.860 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:06.860 "is_configured": true, 00:17:06.860 "data_offset": 0, 00:17:06.860 "data_size": 65536 00:17:06.860 }, 00:17:06.860 { 00:17:06.860 "name": "BaseBdev3", 00:17:06.861 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:06.861 "is_configured": true, 00:17:06.861 "data_offset": 0, 00:17:06.861 "data_size": 65536 00:17:06.861 }, 00:17:06.861 { 00:17:06.861 "name": "BaseBdev4", 00:17:06.861 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:06.861 "is_configured": true, 00:17:06.861 "data_offset": 0, 00:17:06.861 "data_size": 65536 00:17:06.861 } 00:17:06.861 ] 00:17:06.861 }' 00:17:06.861 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.861 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.861 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.861 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.861 09:35:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.240 09:35:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.240 "name": "raid_bdev1", 00:17:08.240 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:08.240 "strip_size_kb": 64, 00:17:08.240 "state": "online", 00:17:08.240 "raid_level": "raid5f", 00:17:08.240 "superblock": false, 00:17:08.240 "num_base_bdevs": 4, 00:17:08.240 "num_base_bdevs_discovered": 4, 00:17:08.240 "num_base_bdevs_operational": 4, 00:17:08.240 "process": { 00:17:08.240 "type": "rebuild", 00:17:08.240 "target": "spare", 00:17:08.240 "progress": { 00:17:08.240 "blocks": 130560, 00:17:08.240 "percent": 66 00:17:08.240 } 00:17:08.240 }, 00:17:08.240 "base_bdevs_list": [ 00:17:08.240 { 00:17:08.240 "name": "spare", 00:17:08.240 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:08.240 "is_configured": true, 00:17:08.240 "data_offset": 0, 00:17:08.240 "data_size": 65536 00:17:08.240 }, 00:17:08.240 { 00:17:08.240 "name": "BaseBdev2", 00:17:08.240 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:08.240 "is_configured": true, 00:17:08.240 "data_offset": 0, 00:17:08.240 "data_size": 65536 00:17:08.240 }, 00:17:08.240 { 
00:17:08.240 "name": "BaseBdev3", 00:17:08.240 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:08.240 "is_configured": true, 00:17:08.240 "data_offset": 0, 00:17:08.240 "data_size": 65536 00:17:08.240 }, 00:17:08.240 { 00:17:08.240 "name": "BaseBdev4", 00:17:08.240 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:08.240 "is_configured": true, 00:17:08.240 "data_offset": 0, 00:17:08.240 "data_size": 65536 00:17:08.240 } 00:17:08.240 ] 00:17:08.240 }' 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.240 09:35:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.179 "name": "raid_bdev1", 00:17:09.179 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:09.179 "strip_size_kb": 64, 00:17:09.179 "state": "online", 00:17:09.179 "raid_level": "raid5f", 00:17:09.179 "superblock": false, 00:17:09.179 "num_base_bdevs": 4, 00:17:09.179 "num_base_bdevs_discovered": 4, 00:17:09.179 "num_base_bdevs_operational": 4, 00:17:09.179 "process": { 00:17:09.179 "type": "rebuild", 00:17:09.179 "target": "spare", 00:17:09.179 "progress": { 00:17:09.179 "blocks": 153600, 00:17:09.179 "percent": 78 00:17:09.179 } 00:17:09.179 }, 00:17:09.179 "base_bdevs_list": [ 00:17:09.179 { 00:17:09.179 "name": "spare", 00:17:09.179 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:09.179 "is_configured": true, 00:17:09.179 "data_offset": 0, 00:17:09.179 "data_size": 65536 00:17:09.179 }, 00:17:09.179 { 00:17:09.179 "name": "BaseBdev2", 00:17:09.179 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:09.179 "is_configured": true, 00:17:09.179 "data_offset": 0, 00:17:09.179 "data_size": 65536 00:17:09.179 }, 00:17:09.179 { 00:17:09.179 "name": "BaseBdev3", 00:17:09.179 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:09.179 "is_configured": true, 00:17:09.179 "data_offset": 0, 00:17:09.179 "data_size": 65536 00:17:09.179 }, 00:17:09.179 { 00:17:09.179 "name": "BaseBdev4", 00:17:09.179 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:09.179 "is_configured": true, 00:17:09.179 "data_offset": 0, 00:17:09.179 "data_size": 65536 00:17:09.179 } 00:17:09.179 ] 00:17:09.179 }' 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.179 09:35:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.179 09:35:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.558 "name": "raid_bdev1", 00:17:10.558 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:10.558 "strip_size_kb": 64, 00:17:10.558 "state": "online", 00:17:10.558 "raid_level": "raid5f", 00:17:10.558 "superblock": false, 00:17:10.558 "num_base_bdevs": 4, 00:17:10.558 
"num_base_bdevs_discovered": 4, 00:17:10.558 "num_base_bdevs_operational": 4, 00:17:10.558 "process": { 00:17:10.558 "type": "rebuild", 00:17:10.558 "target": "spare", 00:17:10.558 "progress": { 00:17:10.558 "blocks": 174720, 00:17:10.558 "percent": 88 00:17:10.558 } 00:17:10.558 }, 00:17:10.558 "base_bdevs_list": [ 00:17:10.558 { 00:17:10.558 "name": "spare", 00:17:10.558 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:10.558 "is_configured": true, 00:17:10.558 "data_offset": 0, 00:17:10.558 "data_size": 65536 00:17:10.558 }, 00:17:10.558 { 00:17:10.558 "name": "BaseBdev2", 00:17:10.558 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:10.558 "is_configured": true, 00:17:10.558 "data_offset": 0, 00:17:10.558 "data_size": 65536 00:17:10.558 }, 00:17:10.558 { 00:17:10.558 "name": "BaseBdev3", 00:17:10.558 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:10.558 "is_configured": true, 00:17:10.558 "data_offset": 0, 00:17:10.558 "data_size": 65536 00:17:10.558 }, 00:17:10.558 { 00:17:10.558 "name": "BaseBdev4", 00:17:10.558 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:10.558 "is_configured": true, 00:17:10.558 "data_offset": 0, 00:17:10.558 "data_size": 65536 00:17:10.558 } 00:17:10.558 ] 00:17:10.558 }' 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.558 09:35:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.496 [2024-11-15 09:35:59.785070] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:11.496 [2024-11-15 09:35:59.785234] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:11.496 [2024-11-15 09:35:59.785342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.496 "name": "raid_bdev1", 00:17:11.496 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:11.496 "strip_size_kb": 64, 00:17:11.496 "state": "online", 00:17:11.496 "raid_level": "raid5f", 00:17:11.496 "superblock": false, 00:17:11.496 "num_base_bdevs": 4, 00:17:11.496 "num_base_bdevs_discovered": 4, 00:17:11.496 "num_base_bdevs_operational": 4, 00:17:11.496 "process": { 00:17:11.496 "type": "rebuild", 00:17:11.496 "target": "spare", 00:17:11.496 "progress": { 00:17:11.496 "blocks": 195840, 00:17:11.496 
"percent": 99 00:17:11.496 } 00:17:11.496 }, 00:17:11.496 "base_bdevs_list": [ 00:17:11.496 { 00:17:11.496 "name": "spare", 00:17:11.496 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:11.496 "is_configured": true, 00:17:11.496 "data_offset": 0, 00:17:11.496 "data_size": 65536 00:17:11.496 }, 00:17:11.496 { 00:17:11.496 "name": "BaseBdev2", 00:17:11.496 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:11.496 "is_configured": true, 00:17:11.496 "data_offset": 0, 00:17:11.496 "data_size": 65536 00:17:11.496 }, 00:17:11.496 { 00:17:11.496 "name": "BaseBdev3", 00:17:11.496 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:11.496 "is_configured": true, 00:17:11.496 "data_offset": 0, 00:17:11.496 "data_size": 65536 00:17:11.496 }, 00:17:11.496 { 00:17:11.496 "name": "BaseBdev4", 00:17:11.496 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:11.496 "is_configured": true, 00:17:11.496 "data_offset": 0, 00:17:11.496 "data_size": 65536 00:17:11.496 } 00:17:11.496 ] 00:17:11.496 }' 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.496 09:35:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.874 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.874 "name": "raid_bdev1", 00:17:12.874 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:12.874 "strip_size_kb": 64, 00:17:12.874 "state": "online", 00:17:12.874 "raid_level": "raid5f", 00:17:12.874 "superblock": false, 00:17:12.874 "num_base_bdevs": 4, 00:17:12.874 "num_base_bdevs_discovered": 4, 00:17:12.874 "num_base_bdevs_operational": 4, 00:17:12.874 "base_bdevs_list": [ 00:17:12.874 { 00:17:12.874 "name": "spare", 00:17:12.875 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev2", 00:17:12.875 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev3", 00:17:12.875 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev4", 00:17:12.875 
"uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 } 00:17:12.875 ] 00:17:12.875 }' 00:17:12.875 09:36:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.875 "name": "raid_bdev1", 00:17:12.875 "uuid": 
"da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:12.875 "strip_size_kb": 64, 00:17:12.875 "state": "online", 00:17:12.875 "raid_level": "raid5f", 00:17:12.875 "superblock": false, 00:17:12.875 "num_base_bdevs": 4, 00:17:12.875 "num_base_bdevs_discovered": 4, 00:17:12.875 "num_base_bdevs_operational": 4, 00:17:12.875 "base_bdevs_list": [ 00:17:12.875 { 00:17:12.875 "name": "spare", 00:17:12.875 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev2", 00:17:12.875 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev3", 00:17:12.875 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev4", 00:17:12.875 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 } 00:17:12.875 ] 00:17:12.875 }' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.875 09:36:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.875 "name": "raid_bdev1", 00:17:12.875 "uuid": "da3d4394-0fe4-48cd-b977-4b8373d059ca", 00:17:12.875 "strip_size_kb": 64, 00:17:12.875 "state": "online", 00:17:12.875 "raid_level": "raid5f", 00:17:12.875 "superblock": false, 00:17:12.875 "num_base_bdevs": 4, 00:17:12.875 "num_base_bdevs_discovered": 4, 00:17:12.875 "num_base_bdevs_operational": 4, 00:17:12.875 "base_bdevs_list": [ 00:17:12.875 { 00:17:12.875 "name": "spare", 00:17:12.875 "uuid": "6082e130-07dc-522a-a92e-7057419af9ad", 00:17:12.875 "is_configured": 
true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev2", 00:17:12.875 "uuid": "09672ecd-4c94-51e8-9c45-6471d340565f", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev3", 00:17:12.875 "uuid": "cbad051b-a736-510f-9dbc-f8d0de1844a5", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 }, 00:17:12.875 { 00:17:12.875 "name": "BaseBdev4", 00:17:12.875 "uuid": "51685deb-f101-5e8e-9921-c6d4b0cdfb47", 00:17:12.875 "is_configured": true, 00:17:12.875 "data_offset": 0, 00:17:12.875 "data_size": 65536 00:17:12.875 } 00:17:12.875 ] 00:17:12.875 }' 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.875 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.445 [2024-11-15 09:36:01.638686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.445 [2024-11-15 09:36:01.638789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.445 [2024-11-15 09:36:01.638967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.445 [2024-11-15 09:36:01.639113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.445 [2024-11-15 09:36:01.639170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:13.445 09:36:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.445 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:13.445 /dev/nbd0 00:17:13.702 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:13.702 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:13.703 1+0 records in 00:17:13.703 1+0 records out 00:17:13.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638419 s, 6.4 MB/s 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # return 0 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.703 09:36:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:13.703 /dev/nbd1 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:13.961 1+0 records in 00:17:13.961 1+0 records out 00:17:13.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340183 s, 12.0 MB/s 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # size=4096 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.961 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.220 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85033 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 85033 ']' 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 85033 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 
-- # '[' Linux = Linux ']' 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85033 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:14.480 killing process with pid 85033 00:17:14.480 Received shutdown signal, test time was about 60.000000 seconds 00:17:14.480 00:17:14.480 Latency(us) 00:17:14.480 [2024-11-15T09:36:02.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.480 [2024-11-15T09:36:02.943Z] =================================================================================================================== 00:17:14.480 [2024-11-15T09:36:02.943Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85033' 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 85033 00:17:14.480 [2024-11-15 09:36:02.930550] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.480 09:36:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 85033 00:17:15.048 [2024-11-15 09:36:03.477382] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:16.452 ************************************ 00:17:16.452 END TEST raid5f_rebuild_test 00:17:16.452 ************************************ 00:17:16.452 00:17:16.452 real 0m20.589s 00:17:16.452 user 0m24.544s 00:17:16.452 sys 0m2.446s 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 09:36:04 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:17:16.452 09:36:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:16.452 09:36:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:16.452 09:36:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.452 ************************************ 00:17:16.452 START TEST raid5f_rebuild_test_sb 00:17:16.452 ************************************ 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:16.452 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85556 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85556 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85556 ']' 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:16.453 09:36:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.453 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:16.453 Zero copy mechanism will not be used. 00:17:16.453 [2024-11-15 09:36:04.856616] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:17:16.453 [2024-11-15 09:36:04.856770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85556 ] 00:17:16.712 [2024-11-15 09:36:05.022657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.712 [2024-11-15 09:36:05.163997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.972 [2024-11-15 09:36:05.401832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.972 [2024-11-15 09:36:05.401910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.232 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:17.232 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:17.232 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:17.232 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:17.232 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.232 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 BaseBdev1_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 [2024-11-15 09:36:05.746474] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:17.493 [2024-11-15 09:36:05.746549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.493 [2024-11-15 09:36:05.746576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:17.493 [2024-11-15 09:36:05.746590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.493 [2024-11-15 09:36:05.749085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.493 [2024-11-15 09:36:05.749125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:17.493 BaseBdev1 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 BaseBdev2_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 [2024-11-15 09:36:05.813141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:17.493 [2024-11-15 09:36:05.813241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:17.493 [2024-11-15 09:36:05.813280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:17.493 [2024-11-15 09:36:05.813297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.493 [2024-11-15 09:36:05.816113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.493 [2024-11-15 09:36:05.816153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:17.493 BaseBdev2 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 BaseBdev3_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 [2024-11-15 09:36:05.888019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:17.493 [2024-11-15 09:36:05.888103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.493 [2024-11-15 09:36:05.888128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:17.493 [2024-11-15 
09:36:05.888142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.493 [2024-11-15 09:36:05.890709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.493 [2024-11-15 09:36:05.890746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:17.493 BaseBdev3 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 BaseBdev4_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.493 [2024-11-15 09:36:05.950724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:17.493 [2024-11-15 09:36:05.950780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.493 [2024-11-15 09:36:05.950800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:17.493 [2024-11-15 09:36:05.950812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.493 [2024-11-15 09:36:05.953279] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:17.493 [2024-11-15 09:36:05.953318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:17.493 BaseBdev4 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.493 09:36:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.753 spare_malloc 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.753 spare_delay 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.753 [2024-11-15 09:36:06.027045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:17.753 [2024-11-15 09:36:06.027127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.753 [2024-11-15 09:36:06.027150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:17:17.753 [2024-11-15 09:36:06.027162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.753 [2024-11-15 09:36:06.029765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.753 [2024-11-15 09:36:06.029830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:17.753 spare 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.753 [2024-11-15 09:36:06.039094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.753 [2024-11-15 09:36:06.041369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:17.753 [2024-11-15 09:36:06.041450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:17.753 [2024-11-15 09:36:06.041512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:17.753 [2024-11-15 09:36:06.041743] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:17.753 [2024-11-15 09:36:06.041767] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:17.753 [2024-11-15 09:36:06.042093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:17.753 [2024-11-15 09:36:06.051032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:17.753 [2024-11-15 09:36:06.051057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:17.753 [2024-11-15 09:36:06.051281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.753 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.754 09:36:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.754 "name": "raid_bdev1", 00:17:17.754 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:17.754 "strip_size_kb": 64, 00:17:17.754 "state": "online", 00:17:17.754 "raid_level": "raid5f", 00:17:17.754 "superblock": true, 00:17:17.754 "num_base_bdevs": 4, 00:17:17.754 "num_base_bdevs_discovered": 4, 00:17:17.754 "num_base_bdevs_operational": 4, 00:17:17.754 "base_bdevs_list": [ 00:17:17.754 { 00:17:17.754 "name": "BaseBdev1", 00:17:17.754 "uuid": "32c2cf3f-e2b6-53a5-8134-5c3c2d9b7cce", 00:17:17.754 "is_configured": true, 00:17:17.754 "data_offset": 2048, 00:17:17.754 "data_size": 63488 00:17:17.754 }, 00:17:17.754 { 00:17:17.754 "name": "BaseBdev2", 00:17:17.754 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:17.754 "is_configured": true, 00:17:17.754 "data_offset": 2048, 00:17:17.754 "data_size": 63488 00:17:17.754 }, 00:17:17.754 { 00:17:17.754 "name": "BaseBdev3", 00:17:17.754 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:17.754 "is_configured": true, 00:17:17.754 "data_offset": 2048, 00:17:17.754 "data_size": 63488 00:17:17.754 }, 00:17:17.754 { 00:17:17.754 "name": "BaseBdev4", 00:17:17.754 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:17.754 "is_configured": true, 00:17:17.754 "data_offset": 2048, 00:17:17.754 "data_size": 63488 00:17:17.754 } 00:17:17.754 ] 00:17:17.754 }' 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.754 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.323 09:36:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.323 [2024-11-15 09:36:06.553216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:18.323 09:36:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:18.323 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:18.583 [2024-11-15 09:36:06.832568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:18.583 /dev/nbd0 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:18.583 1+0 records in 00:17:18.583 
1+0 records out 00:17:18.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402502 s, 10.2 MB/s 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:17:18.583 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:18.584 09:36:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:19.153 496+0 records in 00:17:19.153 496+0 records out 00:17:19.153 97517568 bytes (98 MB, 93 MiB) copied, 0.50368 s, 194 MB/s 00:17:19.154 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:19.154 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.154 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:19.154 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:19.154 09:36:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:19.154 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:19.154 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:19.414 [2024-11-15 09:36:07.623224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.414 [2024-11-15 09:36:07.646533] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:19.414 09:36:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.414 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.415 "name": "raid_bdev1", 00:17:19.415 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:19.415 "strip_size_kb": 64, 00:17:19.415 "state": "online", 00:17:19.415 "raid_level": "raid5f", 00:17:19.415 "superblock": true, 00:17:19.415 "num_base_bdevs": 4, 00:17:19.415 "num_base_bdevs_discovered": 3, 00:17:19.415 "num_base_bdevs_operational": 3, 00:17:19.415 
"base_bdevs_list": [ 00:17:19.415 { 00:17:19.415 "name": null, 00:17:19.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.415 "is_configured": false, 00:17:19.415 "data_offset": 0, 00:17:19.415 "data_size": 63488 00:17:19.415 }, 00:17:19.415 { 00:17:19.415 "name": "BaseBdev2", 00:17:19.415 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:19.415 "is_configured": true, 00:17:19.415 "data_offset": 2048, 00:17:19.415 "data_size": 63488 00:17:19.415 }, 00:17:19.415 { 00:17:19.415 "name": "BaseBdev3", 00:17:19.415 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:19.415 "is_configured": true, 00:17:19.415 "data_offset": 2048, 00:17:19.415 "data_size": 63488 00:17:19.415 }, 00:17:19.415 { 00:17:19.415 "name": "BaseBdev4", 00:17:19.415 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:19.415 "is_configured": true, 00:17:19.415 "data_offset": 2048, 00:17:19.415 "data_size": 63488 00:17:19.415 } 00:17:19.415 ] 00:17:19.415 }' 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.415 09:36:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.676 09:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:19.676 09:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.676 09:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.676 [2024-11-15 09:36:08.137725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:19.936 [2024-11-15 09:36:08.154906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:19.936 09:36:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.936 09:36:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:19.936 [2024-11-15 09:36:08.164613] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.877 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.877 "name": "raid_bdev1", 00:17:20.877 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:20.877 "strip_size_kb": 64, 00:17:20.877 "state": "online", 00:17:20.877 "raid_level": "raid5f", 00:17:20.877 "superblock": true, 00:17:20.877 "num_base_bdevs": 4, 00:17:20.877 "num_base_bdevs_discovered": 4, 00:17:20.877 "num_base_bdevs_operational": 4, 00:17:20.877 "process": { 00:17:20.877 "type": "rebuild", 00:17:20.877 "target": "spare", 00:17:20.877 "progress": { 00:17:20.877 "blocks": 19200, 00:17:20.877 "percent": 10 00:17:20.878 } 00:17:20.878 }, 00:17:20.878 "base_bdevs_list": [ 00:17:20.878 { 00:17:20.878 "name": "spare", 00:17:20.878 "uuid": 
"b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:20.878 "is_configured": true, 00:17:20.878 "data_offset": 2048, 00:17:20.878 "data_size": 63488 00:17:20.878 }, 00:17:20.878 { 00:17:20.878 "name": "BaseBdev2", 00:17:20.878 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:20.878 "is_configured": true, 00:17:20.878 "data_offset": 2048, 00:17:20.878 "data_size": 63488 00:17:20.878 }, 00:17:20.878 { 00:17:20.878 "name": "BaseBdev3", 00:17:20.878 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:20.878 "is_configured": true, 00:17:20.878 "data_offset": 2048, 00:17:20.878 "data_size": 63488 00:17:20.878 }, 00:17:20.878 { 00:17:20.878 "name": "BaseBdev4", 00:17:20.878 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:20.878 "is_configured": true, 00:17:20.878 "data_offset": 2048, 00:17:20.878 "data_size": 63488 00:17:20.878 } 00:17:20.878 ] 00:17:20.878 }' 00:17:20.878 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.878 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.878 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.878 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.878 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:20.878 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.878 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.878 [2024-11-15 09:36:09.296296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.138 [2024-11-15 09:36:09.376386] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:21.138 [2024-11-15 09:36:09.376488] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.138 [2024-11-15 09:36:09.376508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.138 [2024-11-15 09:36:09.376519] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.138 "name": "raid_bdev1", 00:17:21.138 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:21.138 "strip_size_kb": 64, 00:17:21.138 "state": "online", 00:17:21.138 "raid_level": "raid5f", 00:17:21.138 "superblock": true, 00:17:21.138 "num_base_bdevs": 4, 00:17:21.138 "num_base_bdevs_discovered": 3, 00:17:21.138 "num_base_bdevs_operational": 3, 00:17:21.138 "base_bdevs_list": [ 00:17:21.138 { 00:17:21.138 "name": null, 00:17:21.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.138 "is_configured": false, 00:17:21.138 "data_offset": 0, 00:17:21.138 "data_size": 63488 00:17:21.138 }, 00:17:21.138 { 00:17:21.138 "name": "BaseBdev2", 00:17:21.138 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:21.138 "is_configured": true, 00:17:21.138 "data_offset": 2048, 00:17:21.138 "data_size": 63488 00:17:21.138 }, 00:17:21.138 { 00:17:21.138 "name": "BaseBdev3", 00:17:21.138 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:21.138 "is_configured": true, 00:17:21.138 "data_offset": 2048, 00:17:21.138 "data_size": 63488 00:17:21.138 }, 00:17:21.138 { 00:17:21.138 "name": "BaseBdev4", 00:17:21.138 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:21.138 "is_configured": true, 00:17:21.138 "data_offset": 2048, 00:17:21.138 "data_size": 63488 00:17:21.138 } 00:17:21.138 ] 00:17:21.138 }' 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.138 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.397 
09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.397 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.656 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.656 "name": "raid_bdev1", 00:17:21.656 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:21.656 "strip_size_kb": 64, 00:17:21.656 "state": "online", 00:17:21.656 "raid_level": "raid5f", 00:17:21.656 "superblock": true, 00:17:21.656 "num_base_bdevs": 4, 00:17:21.656 "num_base_bdevs_discovered": 3, 00:17:21.656 "num_base_bdevs_operational": 3, 00:17:21.656 "base_bdevs_list": [ 00:17:21.656 { 00:17:21.657 "name": null, 00:17:21.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.657 "is_configured": false, 00:17:21.657 "data_offset": 0, 00:17:21.657 "data_size": 63488 00:17:21.657 }, 00:17:21.657 { 00:17:21.657 "name": "BaseBdev2", 00:17:21.657 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:21.657 "is_configured": true, 00:17:21.657 "data_offset": 2048, 00:17:21.657 "data_size": 63488 00:17:21.657 }, 00:17:21.657 { 00:17:21.657 "name": "BaseBdev3", 00:17:21.657 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:21.657 "is_configured": true, 00:17:21.657 "data_offset": 2048, 00:17:21.657 
"data_size": 63488 00:17:21.657 }, 00:17:21.657 { 00:17:21.657 "name": "BaseBdev4", 00:17:21.657 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:21.657 "is_configured": true, 00:17:21.657 "data_offset": 2048, 00:17:21.657 "data_size": 63488 00:17:21.657 } 00:17:21.657 ] 00:17:21.657 }' 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.657 [2024-11-15 09:36:09.956428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:21.657 [2024-11-15 09:36:09.973566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.657 09:36:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:21.657 [2024-11-15 09:36:09.984011] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.595 09:36:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.595 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.595 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.595 "name": "raid_bdev1", 00:17:22.595 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:22.595 "strip_size_kb": 64, 00:17:22.595 "state": "online", 00:17:22.595 "raid_level": "raid5f", 00:17:22.595 "superblock": true, 00:17:22.595 "num_base_bdevs": 4, 00:17:22.595 "num_base_bdevs_discovered": 4, 00:17:22.595 "num_base_bdevs_operational": 4, 00:17:22.595 "process": { 00:17:22.595 "type": "rebuild", 00:17:22.595 "target": "spare", 00:17:22.595 "progress": { 00:17:22.595 "blocks": 19200, 00:17:22.595 "percent": 10 00:17:22.595 } 00:17:22.595 }, 00:17:22.595 "base_bdevs_list": [ 00:17:22.595 { 00:17:22.595 "name": "spare", 00:17:22.595 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:22.595 "is_configured": true, 00:17:22.595 "data_offset": 2048, 00:17:22.595 "data_size": 63488 00:17:22.595 }, 00:17:22.595 { 00:17:22.595 "name": "BaseBdev2", 00:17:22.595 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:22.595 "is_configured": true, 00:17:22.595 "data_offset": 2048, 00:17:22.595 "data_size": 63488 00:17:22.595 }, 00:17:22.595 { 
00:17:22.595 "name": "BaseBdev3", 00:17:22.595 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:22.595 "is_configured": true, 00:17:22.595 "data_offset": 2048, 00:17:22.595 "data_size": 63488 00:17:22.595 }, 00:17:22.595 { 00:17:22.595 "name": "BaseBdev4", 00:17:22.595 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:22.595 "is_configured": true, 00:17:22.595 "data_offset": 2048, 00:17:22.595 "data_size": 63488 00:17:22.595 } 00:17:22.595 ] 00:17:22.596 }' 00:17:22.596 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:22.857 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=665 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.857 "name": "raid_bdev1", 00:17:22.857 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:22.857 "strip_size_kb": 64, 00:17:22.857 "state": "online", 00:17:22.857 "raid_level": "raid5f", 00:17:22.857 "superblock": true, 00:17:22.857 "num_base_bdevs": 4, 00:17:22.857 "num_base_bdevs_discovered": 4, 00:17:22.857 "num_base_bdevs_operational": 4, 00:17:22.857 "process": { 00:17:22.857 "type": "rebuild", 00:17:22.857 "target": "spare", 00:17:22.857 "progress": { 00:17:22.857 "blocks": 21120, 00:17:22.857 "percent": 11 00:17:22.857 } 00:17:22.857 }, 00:17:22.857 "base_bdevs_list": [ 00:17:22.857 { 00:17:22.857 "name": "spare", 00:17:22.857 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:22.857 "is_configured": true, 00:17:22.857 "data_offset": 2048, 00:17:22.857 "data_size": 63488 00:17:22.857 }, 00:17:22.857 { 00:17:22.857 "name": "BaseBdev2", 00:17:22.857 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:22.857 "is_configured": true, 00:17:22.857 "data_offset": 2048, 00:17:22.857 "data_size": 63488 00:17:22.857 }, 00:17:22.857 { 
00:17:22.857 "name": "BaseBdev3", 00:17:22.857 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:22.857 "is_configured": true, 00:17:22.857 "data_offset": 2048, 00:17:22.857 "data_size": 63488 00:17:22.857 }, 00:17:22.857 { 00:17:22.857 "name": "BaseBdev4", 00:17:22.857 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:22.857 "is_configured": true, 00:17:22.857 "data_offset": 2048, 00:17:22.857 "data_size": 63488 00:17:22.857 } 00:17:22.857 ] 00:17:22.857 }' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:22.857 09:36:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:23.796 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:23.796 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.796 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.796 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.796 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.796 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.796 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.797 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.797 09:36:12 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.797 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.056 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.056 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.056 "name": "raid_bdev1", 00:17:24.056 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:24.056 "strip_size_kb": 64, 00:17:24.056 "state": "online", 00:17:24.056 "raid_level": "raid5f", 00:17:24.056 "superblock": true, 00:17:24.056 "num_base_bdevs": 4, 00:17:24.056 "num_base_bdevs_discovered": 4, 00:17:24.056 "num_base_bdevs_operational": 4, 00:17:24.056 "process": { 00:17:24.056 "type": "rebuild", 00:17:24.056 "target": "spare", 00:17:24.056 "progress": { 00:17:24.056 "blocks": 42240, 00:17:24.056 "percent": 22 00:17:24.056 } 00:17:24.056 }, 00:17:24.056 "base_bdevs_list": [ 00:17:24.056 { 00:17:24.056 "name": "spare", 00:17:24.056 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:24.056 "is_configured": true, 00:17:24.056 "data_offset": 2048, 00:17:24.056 "data_size": 63488 00:17:24.056 }, 00:17:24.056 { 00:17:24.056 "name": "BaseBdev2", 00:17:24.056 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:24.056 "is_configured": true, 00:17:24.056 "data_offset": 2048, 00:17:24.056 "data_size": 63488 00:17:24.056 }, 00:17:24.056 { 00:17:24.056 "name": "BaseBdev3", 00:17:24.056 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:24.056 "is_configured": true, 00:17:24.056 "data_offset": 2048, 00:17:24.056 "data_size": 63488 00:17:24.056 }, 00:17:24.056 { 00:17:24.056 "name": "BaseBdev4", 00:17:24.056 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:24.056 "is_configured": true, 00:17:24.056 "data_offset": 2048, 00:17:24.056 "data_size": 63488 00:17:24.056 } 00:17:24.056 ] 00:17:24.056 }' 00:17:24.056 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.056 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.056 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.056 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.056 09:36:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.995 "name": "raid_bdev1", 00:17:24.995 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:24.995 "strip_size_kb": 64, 00:17:24.995 "state": 
"online", 00:17:24.995 "raid_level": "raid5f", 00:17:24.995 "superblock": true, 00:17:24.995 "num_base_bdevs": 4, 00:17:24.995 "num_base_bdevs_discovered": 4, 00:17:24.995 "num_base_bdevs_operational": 4, 00:17:24.995 "process": { 00:17:24.995 "type": "rebuild", 00:17:24.995 "target": "spare", 00:17:24.995 "progress": { 00:17:24.995 "blocks": 63360, 00:17:24.995 "percent": 33 00:17:24.995 } 00:17:24.995 }, 00:17:24.995 "base_bdevs_list": [ 00:17:24.995 { 00:17:24.995 "name": "spare", 00:17:24.995 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:24.995 "is_configured": true, 00:17:24.995 "data_offset": 2048, 00:17:24.995 "data_size": 63488 00:17:24.995 }, 00:17:24.995 { 00:17:24.995 "name": "BaseBdev2", 00:17:24.995 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:24.995 "is_configured": true, 00:17:24.995 "data_offset": 2048, 00:17:24.995 "data_size": 63488 00:17:24.995 }, 00:17:24.995 { 00:17:24.995 "name": "BaseBdev3", 00:17:24.995 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:24.995 "is_configured": true, 00:17:24.995 "data_offset": 2048, 00:17:24.995 "data_size": 63488 00:17:24.995 }, 00:17:24.995 { 00:17:24.995 "name": "BaseBdev4", 00:17:24.995 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:24.995 "is_configured": true, 00:17:24.995 "data_offset": 2048, 00:17:24.995 "data_size": 63488 00:17:24.995 } 00:17:24.995 ] 00:17:24.995 }' 00:17:24.995 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.254 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.254 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.254 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.254 09:36:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.197 "name": "raid_bdev1", 00:17:26.197 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:26.197 "strip_size_kb": 64, 00:17:26.197 "state": "online", 00:17:26.197 "raid_level": "raid5f", 00:17:26.197 "superblock": true, 00:17:26.197 "num_base_bdevs": 4, 00:17:26.197 "num_base_bdevs_discovered": 4, 00:17:26.197 "num_base_bdevs_operational": 4, 00:17:26.197 "process": { 00:17:26.197 "type": "rebuild", 00:17:26.197 "target": "spare", 00:17:26.197 "progress": { 00:17:26.197 "blocks": 86400, 00:17:26.197 "percent": 45 00:17:26.197 } 00:17:26.197 }, 00:17:26.197 "base_bdevs_list": [ 00:17:26.197 { 00:17:26.197 "name": "spare", 00:17:26.197 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 
00:17:26.197 "is_configured": true, 00:17:26.197 "data_offset": 2048, 00:17:26.197 "data_size": 63488 00:17:26.197 }, 00:17:26.197 { 00:17:26.197 "name": "BaseBdev2", 00:17:26.197 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:26.197 "is_configured": true, 00:17:26.197 "data_offset": 2048, 00:17:26.197 "data_size": 63488 00:17:26.197 }, 00:17:26.197 { 00:17:26.197 "name": "BaseBdev3", 00:17:26.197 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:26.197 "is_configured": true, 00:17:26.197 "data_offset": 2048, 00:17:26.197 "data_size": 63488 00:17:26.197 }, 00:17:26.197 { 00:17:26.197 "name": "BaseBdev4", 00:17:26.197 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:26.197 "is_configured": true, 00:17:26.197 "data_offset": 2048, 00:17:26.197 "data_size": 63488 00:17:26.197 } 00:17:26.197 ] 00:17:26.197 }' 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.197 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.468 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.468 09:36:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.406 09:36:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.406 "name": "raid_bdev1", 00:17:27.406 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:27.406 "strip_size_kb": 64, 00:17:27.406 "state": "online", 00:17:27.406 "raid_level": "raid5f", 00:17:27.406 "superblock": true, 00:17:27.406 "num_base_bdevs": 4, 00:17:27.406 "num_base_bdevs_discovered": 4, 00:17:27.406 "num_base_bdevs_operational": 4, 00:17:27.406 "process": { 00:17:27.406 "type": "rebuild", 00:17:27.406 "target": "spare", 00:17:27.406 "progress": { 00:17:27.406 "blocks": 107520, 00:17:27.406 "percent": 56 00:17:27.406 } 00:17:27.406 }, 00:17:27.406 "base_bdevs_list": [ 00:17:27.406 { 00:17:27.406 "name": "spare", 00:17:27.406 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:27.406 "is_configured": true, 00:17:27.406 "data_offset": 2048, 00:17:27.406 "data_size": 63488 00:17:27.406 }, 00:17:27.406 { 00:17:27.406 "name": "BaseBdev2", 00:17:27.406 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:27.406 "is_configured": true, 00:17:27.406 "data_offset": 2048, 00:17:27.406 "data_size": 63488 00:17:27.406 }, 00:17:27.406 { 00:17:27.406 "name": "BaseBdev3", 00:17:27.406 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:27.406 "is_configured": true, 00:17:27.406 "data_offset": 2048, 00:17:27.406 
"data_size": 63488 00:17:27.406 }, 00:17:27.406 { 00:17:27.406 "name": "BaseBdev4", 00:17:27.406 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:27.406 "is_configured": true, 00:17:27.406 "data_offset": 2048, 00:17:27.406 "data_size": 63488 00:17:27.406 } 00:17:27.406 ] 00:17:27.406 }' 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.406 09:36:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.787 
09:36:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.787 "name": "raid_bdev1", 00:17:28.787 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:28.787 "strip_size_kb": 64, 00:17:28.787 "state": "online", 00:17:28.787 "raid_level": "raid5f", 00:17:28.787 "superblock": true, 00:17:28.787 "num_base_bdevs": 4, 00:17:28.787 "num_base_bdevs_discovered": 4, 00:17:28.787 "num_base_bdevs_operational": 4, 00:17:28.787 "process": { 00:17:28.787 "type": "rebuild", 00:17:28.787 "target": "spare", 00:17:28.787 "progress": { 00:17:28.787 "blocks": 128640, 00:17:28.787 "percent": 67 00:17:28.787 } 00:17:28.787 }, 00:17:28.787 "base_bdevs_list": [ 00:17:28.787 { 00:17:28.787 "name": "spare", 00:17:28.787 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:28.787 "is_configured": true, 00:17:28.787 "data_offset": 2048, 00:17:28.787 "data_size": 63488 00:17:28.787 }, 00:17:28.787 { 00:17:28.787 "name": "BaseBdev2", 00:17:28.787 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:28.787 "is_configured": true, 00:17:28.787 "data_offset": 2048, 00:17:28.787 "data_size": 63488 00:17:28.787 }, 00:17:28.787 { 00:17:28.787 "name": "BaseBdev3", 00:17:28.787 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:28.787 "is_configured": true, 00:17:28.787 "data_offset": 2048, 00:17:28.787 "data_size": 63488 00:17:28.787 }, 00:17:28.787 { 00:17:28.787 "name": "BaseBdev4", 00:17:28.787 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:28.787 "is_configured": true, 00:17:28.787 "data_offset": 2048, 00:17:28.787 "data_size": 63488 00:17:28.787 } 00:17:28.787 ] 00:17:28.787 }' 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.787 09:36:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:28.787 09:36:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:29.725 09:36:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.725 09:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:29.725 "name": "raid_bdev1",
00:17:29.725 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2",
00:17:29.725 "strip_size_kb": 64,
00:17:29.725 "state": "online",
00:17:29.725 "raid_level": "raid5f",
00:17:29.725 "superblock": true,
00:17:29.725 "num_base_bdevs": 4,
00:17:29.725 "num_base_bdevs_discovered": 4,
00:17:29.725 "num_base_bdevs_operational": 4,
00:17:29.725 "process": {
00:17:29.725 "type": "rebuild",
00:17:29.725 "target": "spare",
00:17:29.725 "progress": {
00:17:29.725 "blocks": 151680,
00:17:29.725 "percent": 79
00:17:29.725 }
00:17:29.725 },
00:17:29.725 "base_bdevs_list": [
00:17:29.725 {
00:17:29.725 "name": "spare",
00:17:29.725 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb",
00:17:29.725 "is_configured": true,
00:17:29.725 "data_offset": 2048,
00:17:29.725 "data_size": 63488
00:17:29.725 },
00:17:29.725 {
00:17:29.725 "name": "BaseBdev2",
00:17:29.725 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504",
00:17:29.725 "is_configured": true,
00:17:29.725 "data_offset": 2048,
00:17:29.725 "data_size": 63488
00:17:29.725 },
00:17:29.725 {
00:17:29.725 "name": "BaseBdev3",
00:17:29.725 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb",
00:17:29.725 "is_configured": true,
00:17:29.725 "data_offset": 2048,
00:17:29.725 "data_size": 63488
00:17:29.725 },
00:17:29.725 {
00:17:29.725 "name": "BaseBdev4",
00:17:29.725 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4",
00:17:29.725 "is_configured": true,
00:17:29.725 "data_offset": 2048,
00:17:29.725 "data_size": 63488
00:17:29.725 }
00:17:29.725 ]
00:17:29.725 }'
00:17:29.725 09:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:29.726 09:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:29.726 09:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:29.726 09:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:29.726 09:36:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:30.664 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:30.923 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.923 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:30.923 "name": "raid_bdev1",
00:17:30.923 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2",
00:17:30.923 "strip_size_kb": 64,
00:17:30.923 "state": "online",
00:17:30.923 "raid_level": "raid5f",
00:17:30.923 "superblock": true,
00:17:30.923 "num_base_bdevs": 4,
00:17:30.923 "num_base_bdevs_discovered": 4,
00:17:30.923 "num_base_bdevs_operational": 4,
00:17:30.923 "process": {
00:17:30.923 "type": "rebuild",
00:17:30.923 "target": "spare",
00:17:30.923 "progress": {
00:17:30.923 "blocks": 172800,
00:17:30.924 "percent": 90
00:17:30.924 }
00:17:30.924 },
00:17:30.924 "base_bdevs_list": [
00:17:30.924 {
00:17:30.924 "name": "spare",
00:17:30.924 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb",
00:17:30.924 "is_configured": true,
00:17:30.924 "data_offset": 2048,
00:17:30.924 "data_size": 63488
00:17:30.924 },
00:17:30.924 {
00:17:30.924 "name": "BaseBdev2",
00:17:30.924 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504",
00:17:30.924 "is_configured": true,
00:17:30.924 "data_offset": 2048,
00:17:30.924 "data_size": 63488
00:17:30.924 },
00:17:30.924 {
00:17:30.924 "name": "BaseBdev3",
00:17:30.924 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb",
00:17:30.924 "is_configured": true,
00:17:30.924 "data_offset": 2048,
00:17:30.924 "data_size": 63488
00:17:30.924 },
00:17:30.924 {
00:17:30.924 "name": "BaseBdev4",
00:17:30.924 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4",
00:17:30.924 "is_configured": true,
00:17:30.924 "data_offset": 2048,
00:17:30.924 "data_size": 63488
00:17:30.924 }
00:17:30.924 ]
00:17:30.924 }'
00:17:30.924 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:30.924 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:30.924 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:30.924 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:30.924 09:36:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:31.916 [2024-11-15 09:36:20.075517] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:17:31.916 [2024-11-15 09:36:20.075613] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:17:31.916 [2024-11-15 09:36:20.075772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:31.916 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:31.916 "name": "raid_bdev1",
00:17:31.916 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2",
00:17:31.916 "strip_size_kb": 64,
00:17:31.916 "state": "online",
00:17:31.916 "raid_level": "raid5f",
00:17:31.916 "superblock": true,
00:17:31.916 "num_base_bdevs": 4,
00:17:31.916 "num_base_bdevs_discovered": 4,
00:17:31.916 "num_base_bdevs_operational": 4,
00:17:31.916 "base_bdevs_list": [
00:17:31.916 {
00:17:31.916 "name": "spare",
00:17:31.916 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb",
00:17:31.916 "is_configured": true,
00:17:31.916 "data_offset": 2048,
00:17:31.916 "data_size": 63488
00:17:31.916 },
00:17:31.916 {
00:17:31.916 "name": "BaseBdev2",
00:17:31.916 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504",
00:17:31.916 "is_configured": true,
00:17:31.916 "data_offset": 2048,
00:17:31.916 "data_size": 63488
00:17:31.916 },
00:17:31.916 {
00:17:31.916 "name": "BaseBdev3",
00:17:31.916 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb",
00:17:31.916 "is_configured": true,
00:17:31.916 "data_offset": 2048,
00:17:31.916 "data_size": 63488
00:17:31.916 },
00:17:31.916 {
00:17:31.916 "name": "BaseBdev4",
00:17:31.916 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4",
00:17:31.916 "is_configured": true,
00:17:31.916 "data_offset": 2048,
00:17:31.916 "data_size": 63488
00:17:31.916 }
00:17:31.916 ]
00:17:31.916 }'
00:17:31.917 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:31.917 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:17:31.917 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.176 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:32.177 "name": "raid_bdev1",
00:17:32.177 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2",
00:17:32.177 "strip_size_kb": 64,
00:17:32.177 "state": "online",
00:17:32.177 "raid_level": "raid5f",
00:17:32.177 "superblock": true,
00:17:32.177 "num_base_bdevs": 4,
00:17:32.177 "num_base_bdevs_discovered": 4,
00:17:32.177 "num_base_bdevs_operational": 4,
00:17:32.177 "base_bdevs_list": [
00:17:32.177 {
00:17:32.177 "name": "spare",
00:17:32.177 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 },
00:17:32.177 {
00:17:32.177 "name": "BaseBdev2",
00:17:32.177 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 },
00:17:32.177 {
00:17:32.177 "name": "BaseBdev3",
00:17:32.177 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 },
00:17:32.177 {
00:17:32.177 "name": "BaseBdev4",
00:17:32.177 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 }
00:17:32.177 ]
00:17:32.177 }'
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:32.177 "name": "raid_bdev1",
00:17:32.177 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2",
00:17:32.177 "strip_size_kb": 64,
00:17:32.177 "state": "online",
00:17:32.177 "raid_level": "raid5f",
00:17:32.177 "superblock": true,
00:17:32.177 "num_base_bdevs": 4,
00:17:32.177 "num_base_bdevs_discovered": 4,
00:17:32.177 "num_base_bdevs_operational": 4,
00:17:32.177 "base_bdevs_list": [
00:17:32.177 {
00:17:32.177 "name": "spare",
00:17:32.177 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 },
00:17:32.177 {
00:17:32.177 "name": "BaseBdev2",
00:17:32.177 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 },
00:17:32.177 {
00:17:32.177 "name": "BaseBdev3",
00:17:32.177 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 },
00:17:32.177 {
00:17:32.177 "name": "BaseBdev4",
00:17:32.177 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4",
00:17:32.177 "is_configured": true,
00:17:32.177 "data_offset": 2048,
00:17:32.177 "data_size": 63488
00:17:32.177 }
00:17:32.177 ]
00:17:32.177 }'
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:32.177 09:36:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:32.751 [2024-11-15 09:36:21.070075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:32.751 [2024-11-15 09:36:21.070121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:32.751 [2024-11-15 09:36:21.070237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:32.751 [2024-11-15 09:36:21.070359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:32.751 [2024-11-15 09:36:21.070392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:32.751 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:17:33.011 /dev/nbd0
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:33.011 1+0 records in
00:17:33.011 1+0 records out
00:17:33.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387615 s, 10.6 MB/s
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:33.011 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1
00:17:33.271 /dev/nbd1
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:33.271 1+0 records in
00:17:33.271 1+0 records out
00:17:33.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386956 s, 10.6 MB/s
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:17:33.271 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:17:33.530 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1'
00:17:33.530 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:33.530 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:17:33.530 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:33.530 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:17:33.530 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:33.530 09:36:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:33.789 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.048 [2024-11-15 09:36:22.320271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:17:34.048 [2024-11-15 09:36:22.320362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:34.048 [2024-11-15 09:36:22.320395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:17:34.048 [2024-11-15 09:36:22.320408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:34.048 [2024-11-15 09:36:22.323433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:34.048 [2024-11-15 09:36:22.323476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:17:34.048 [2024-11-15 09:36:22.323593] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:17:34.048 [2024-11-15 09:36:22.323671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:34.048 [2024-11-15 09:36:22.323888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:34.048 [2024-11-15 09:36:22.324021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:17:34.048 [2024-11-15 09:36:22.324148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:17:34.048 spare
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.048 [2024-11-15 09:36:22.424106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:17:34.048 [2024-11-15 09:36:22.424194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:17:34.048 [2024-11-15 09:36:22.424628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0
00:17:34.048 [2024-11-15 09:36:22.431906] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:17:34.048 [2024-11-15 09:36:22.431952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:17:34.048 [2024-11-15 09:36:22.432234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:34.048 "name": "raid_bdev1",
00:17:34.048 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2",
00:17:34.048 "strip_size_kb": 64,
00:17:34.048 "state": "online",
00:17:34.048 "raid_level": "raid5f",
00:17:34.048 "superblock": true,
00:17:34.048 "num_base_bdevs": 4,
00:17:34.048 "num_base_bdevs_discovered": 4,
00:17:34.048 "num_base_bdevs_operational": 4,
00:17:34.048 "base_bdevs_list": [
00:17:34.048 {
00:17:34.048 "name": "spare",
00:17:34.048 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb",
00:17:34.048 "is_configured": true,
00:17:34.048 "data_offset": 2048,
00:17:34.048 "data_size": 63488
00:17:34.048 },
00:17:34.048 {
00:17:34.048 "name": "BaseBdev2",
00:17:34.048 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504",
00:17:34.048 "is_configured": true,
00:17:34.048 "data_offset": 2048,
00:17:34.048 "data_size": 63488
00:17:34.048 },
00:17:34.048 {
00:17:34.048 "name": "BaseBdev3",
00:17:34.048 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb",
00:17:34.048 "is_configured": true,
00:17:34.048 "data_offset": 2048,
00:17:34.048 "data_size": 63488
00:17:34.048 },
00:17:34.048 {
00:17:34.048 "name": "BaseBdev4",
00:17:34.048 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4",
00:17:34.048 "is_configured": true,
00:17:34.048 "data_offset": 2048,
00:17:34.048 "data_size": 63488
00:17:34.048 }
00:17:34.048 ]
00:17:34.048 }'
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:34.048 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:34.616 "name": "raid_bdev1",
00:17:34.616 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2",
00:17:34.616 "strip_size_kb": 64,
00:17:34.616 "state": "online",
00:17:34.616 "raid_level": "raid5f",
00:17:34.616 "superblock": true,
00:17:34.616 "num_base_bdevs": 4,
00:17:34.616 "num_base_bdevs_discovered": 4,
00:17:34.616 "num_base_bdevs_operational": 4,
00:17:34.616 "base_bdevs_list": [
00:17:34.616 {
00:17:34.616 "name": "spare",
00:17:34.616 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb",
00:17:34.616 "is_configured": true,
00:17:34.616 "data_offset": 2048,
00:17:34.616 "data_size": 63488
00:17:34.616 },
00:17:34.616 {
00:17:34.616 "name": "BaseBdev2",
00:17:34.616 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504",
00:17:34.616 "is_configured": true,
00:17:34.616 "data_offset": 2048,
00:17:34.616 "data_size": 63488
00:17:34.616 },
00:17:34.616 {
00:17:34.616 "name": "BaseBdev3",
00:17:34.616 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb",
00:17:34.616 "is_configured": true,
00:17:34.616 "data_offset": 2048,
00:17:34.616 "data_size": 63488
00:17:34.616 },
00:17:34.616 {
00:17:34.616 "name": "BaseBdev4",
00:17:34.616 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4",
00:17:34.616 "is_configured": true,
00:17:34.616 "data_offset": 2048,
00:17:34.616 "data_size": 63488
00:17:34.616 }
00:17:34.616 ]
00:17:34.616 }'
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:34.616 09:36:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:34.616 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:34.616 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:34.616 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:17:34.616 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.616 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.875 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.875 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:17:34.875 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:34.876 [2024-11-15 09:36:23.097142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.876 "name": "raid_bdev1", 00:17:34.876 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:34.876 "strip_size_kb": 64, 00:17:34.876 "state": "online", 00:17:34.876 "raid_level": "raid5f", 00:17:34.876 "superblock": true, 00:17:34.876 "num_base_bdevs": 4, 00:17:34.876 "num_base_bdevs_discovered": 3, 00:17:34.876 "num_base_bdevs_operational": 3, 00:17:34.876 "base_bdevs_list": [ 00:17:34.876 { 00:17:34.876 "name": null, 00:17:34.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.876 "is_configured": false, 00:17:34.876 "data_offset": 0, 00:17:34.876 "data_size": 63488 00:17:34.876 }, 00:17:34.876 { 00:17:34.876 "name": "BaseBdev2", 00:17:34.876 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:34.876 "is_configured": true, 00:17:34.876 "data_offset": 2048, 00:17:34.876 "data_size": 63488 00:17:34.876 }, 00:17:34.876 { 00:17:34.876 "name": "BaseBdev3", 00:17:34.876 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:34.876 "is_configured": true, 00:17:34.876 "data_offset": 2048, 00:17:34.876 "data_size": 63488 00:17:34.876 }, 00:17:34.876 { 00:17:34.876 "name": "BaseBdev4", 00:17:34.876 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:34.876 "is_configured": true, 00:17:34.876 "data_offset": 
2048, 00:17:34.876 "data_size": 63488 00:17:34.876 } 00:17:34.876 ] 00:17:34.876 }' 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.876 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.136 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.136 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.136 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.136 [2024-11-15 09:36:23.556471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.136 [2024-11-15 09:36:23.556724] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:35.136 [2024-11-15 09:36:23.556752] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
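The degraded-state check that `verify_raid_bdev_state` performs on the RPC output above can be sketched in Python. This is a minimal illustration of the assertions only, assuming the JSON shape shown in the log; the real harness is the bash helper in `bdev/bdev_raid.sh`, and the field values below are copied from the log output.

```python
import json

# JSON as returned by `rpc_cmd bdev_raid_get_bdevs all`, trimmed to the
# fields the helper inspects (values taken from the log above, after
# `bdev_raid_remove_base_bdev spare` has run).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror of the bash helper's checks: the array must stay online
    in the expected raid level, with the expected operational count."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return True

# After removing "spare", raid5f stays online but runs degraded (3 of 4).
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid5f", 64, 3))
```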
00:17:35.136 [2024-11-15 09:36:23.556808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.136 [2024-11-15 09:36:23.572173] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:35.136 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.136 09:36:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:35.136 [2024-11-15 09:36:23.581991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.518 "name": "raid_bdev1", 00:17:36.518 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:36.518 "strip_size_kb": 64, 00:17:36.518 "state": "online", 00:17:36.518 
"raid_level": "raid5f", 00:17:36.518 "superblock": true, 00:17:36.518 "num_base_bdevs": 4, 00:17:36.518 "num_base_bdevs_discovered": 4, 00:17:36.518 "num_base_bdevs_operational": 4, 00:17:36.518 "process": { 00:17:36.518 "type": "rebuild", 00:17:36.518 "target": "spare", 00:17:36.518 "progress": { 00:17:36.518 "blocks": 19200, 00:17:36.518 "percent": 10 00:17:36.518 } 00:17:36.518 }, 00:17:36.518 "base_bdevs_list": [ 00:17:36.518 { 00:17:36.518 "name": "spare", 00:17:36.518 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:36.518 "is_configured": true, 00:17:36.518 "data_offset": 2048, 00:17:36.518 "data_size": 63488 00:17:36.518 }, 00:17:36.518 { 00:17:36.518 "name": "BaseBdev2", 00:17:36.518 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:36.518 "is_configured": true, 00:17:36.518 "data_offset": 2048, 00:17:36.518 "data_size": 63488 00:17:36.518 }, 00:17:36.518 { 00:17:36.518 "name": "BaseBdev3", 00:17:36.518 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:36.518 "is_configured": true, 00:17:36.518 "data_offset": 2048, 00:17:36.518 "data_size": 63488 00:17:36.518 }, 00:17:36.518 { 00:17:36.518 "name": "BaseBdev4", 00:17:36.518 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:36.518 "is_configured": true, 00:17:36.518 "data_offset": 2048, 00:17:36.518 "data_size": 63488 00:17:36.518 } 00:17:36.518 ] 00:17:36.518 }' 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.518 [2024-11-15 09:36:24.737575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.518 [2024-11-15 09:36:24.793610] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:36.518 [2024-11-15 09:36:24.793705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.518 [2024-11-15 09:36:24.793726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.518 [2024-11-15 09:36:24.793738] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.518 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.518 "name": "raid_bdev1", 00:17:36.518 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:36.518 "strip_size_kb": 64, 00:17:36.518 "state": "online", 00:17:36.518 "raid_level": "raid5f", 00:17:36.518 "superblock": true, 00:17:36.518 "num_base_bdevs": 4, 00:17:36.518 "num_base_bdevs_discovered": 3, 00:17:36.518 "num_base_bdevs_operational": 3, 00:17:36.518 "base_bdevs_list": [ 00:17:36.518 { 00:17:36.518 "name": null, 00:17:36.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.518 "is_configured": false, 00:17:36.518 "data_offset": 0, 00:17:36.518 "data_size": 63488 00:17:36.519 }, 00:17:36.519 { 00:17:36.519 "name": "BaseBdev2", 00:17:36.519 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:36.519 "is_configured": true, 00:17:36.519 "data_offset": 2048, 00:17:36.519 "data_size": 63488 00:17:36.519 }, 00:17:36.519 { 00:17:36.519 "name": "BaseBdev3", 00:17:36.519 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:36.519 "is_configured": true, 00:17:36.519 "data_offset": 2048, 00:17:36.519 "data_size": 63488 00:17:36.519 }, 00:17:36.519 { 00:17:36.519 "name": "BaseBdev4", 00:17:36.519 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:36.519 "is_configured": true, 00:17:36.519 "data_offset": 2048, 00:17:36.519 "data_size": 63488 00:17:36.519 } 00:17:36.519 ] 00:17:36.519 
}' 00:17:36.519 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.519 09:36:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.088 09:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.089 09:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.089 09:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.089 [2024-11-15 09:36:25.313205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.089 [2024-11-15 09:36:25.313301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.089 [2024-11-15 09:36:25.313336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:37.089 [2024-11-15 09:36:25.313350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.089 [2024-11-15 09:36:25.313990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.089 [2024-11-15 09:36:25.314023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.089 [2024-11-15 09:36:25.314148] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.089 [2024-11-15 09:36:25.314167] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:37.089 [2024-11-15 09:36:25.314180] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:37.089 [2024-11-15 09:36:25.314213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.089 [2024-11-15 09:36:25.330539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:37.089 spare 00:17:37.089 09:36:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.089 09:36:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:37.089 [2024-11-15 09:36:25.342181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.029 "name": "raid_bdev1", 00:17:38.029 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:38.029 "strip_size_kb": 64, 00:17:38.029 "state": 
"online", 00:17:38.029 "raid_level": "raid5f", 00:17:38.029 "superblock": true, 00:17:38.029 "num_base_bdevs": 4, 00:17:38.029 "num_base_bdevs_discovered": 4, 00:17:38.029 "num_base_bdevs_operational": 4, 00:17:38.029 "process": { 00:17:38.029 "type": "rebuild", 00:17:38.029 "target": "spare", 00:17:38.029 "progress": { 00:17:38.029 "blocks": 19200, 00:17:38.029 "percent": 10 00:17:38.029 } 00:17:38.029 }, 00:17:38.029 "base_bdevs_list": [ 00:17:38.029 { 00:17:38.029 "name": "spare", 00:17:38.029 "uuid": "b14229a6-dac0-51b1-aa37-b4935201bfbb", 00:17:38.029 "is_configured": true, 00:17:38.029 "data_offset": 2048, 00:17:38.029 "data_size": 63488 00:17:38.029 }, 00:17:38.029 { 00:17:38.029 "name": "BaseBdev2", 00:17:38.029 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:38.029 "is_configured": true, 00:17:38.029 "data_offset": 2048, 00:17:38.029 "data_size": 63488 00:17:38.029 }, 00:17:38.029 { 00:17:38.029 "name": "BaseBdev3", 00:17:38.029 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:38.029 "is_configured": true, 00:17:38.029 "data_offset": 2048, 00:17:38.029 "data_size": 63488 00:17:38.029 }, 00:17:38.029 { 00:17:38.029 "name": "BaseBdev4", 00:17:38.029 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:38.029 "is_configured": true, 00:17:38.029 "data_offset": 2048, 00:17:38.029 "data_size": 63488 00:17:38.029 } 00:17:38.029 ] 00:17:38.029 }' 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:38.029 09:36:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.029 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.300 [2024-11-15 09:36:26.498210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.300 [2024-11-15 09:36:26.554209] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.300 [2024-11-15 09:36:26.554284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.300 [2024-11-15 09:36:26.554323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.300 [2024-11-15 09:36:26.554331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.300 09:36:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.300 "name": "raid_bdev1", 00:17:38.300 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:38.300 "strip_size_kb": 64, 00:17:38.300 "state": "online", 00:17:38.300 "raid_level": "raid5f", 00:17:38.300 "superblock": true, 00:17:38.300 "num_base_bdevs": 4, 00:17:38.300 "num_base_bdevs_discovered": 3, 00:17:38.300 "num_base_bdevs_operational": 3, 00:17:38.300 "base_bdevs_list": [ 00:17:38.300 { 00:17:38.300 "name": null, 00:17:38.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.300 "is_configured": false, 00:17:38.300 "data_offset": 0, 00:17:38.300 "data_size": 63488 00:17:38.300 }, 00:17:38.300 { 00:17:38.300 "name": "BaseBdev2", 00:17:38.300 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:38.300 "is_configured": true, 00:17:38.300 "data_offset": 2048, 00:17:38.300 "data_size": 63488 00:17:38.300 }, 00:17:38.300 { 00:17:38.300 "name": "BaseBdev3", 00:17:38.300 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:38.300 "is_configured": true, 00:17:38.300 "data_offset": 2048, 00:17:38.300 "data_size": 63488 00:17:38.300 }, 00:17:38.300 { 00:17:38.300 "name": "BaseBdev4", 00:17:38.300 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:38.300 "is_configured": true, 00:17:38.300 "data_offset": 2048, 00:17:38.300 
"data_size": 63488 00:17:38.300 } 00:17:38.300 ] 00:17:38.300 }' 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.300 09:36:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.577 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.577 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.577 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.577 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.577 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.837 "name": "raid_bdev1", 00:17:38.837 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:38.837 "strip_size_kb": 64, 00:17:38.837 "state": "online", 00:17:38.837 "raid_level": "raid5f", 00:17:38.837 "superblock": true, 00:17:38.837 "num_base_bdevs": 4, 00:17:38.837 "num_base_bdevs_discovered": 3, 00:17:38.837 "num_base_bdevs_operational": 3, 00:17:38.837 "base_bdevs_list": [ 00:17:38.837 { 00:17:38.837 "name": null, 00:17:38.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.837 
"is_configured": false, 00:17:38.837 "data_offset": 0, 00:17:38.837 "data_size": 63488 00:17:38.837 }, 00:17:38.837 { 00:17:38.837 "name": "BaseBdev2", 00:17:38.837 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:38.837 "is_configured": true, 00:17:38.837 "data_offset": 2048, 00:17:38.837 "data_size": 63488 00:17:38.837 }, 00:17:38.837 { 00:17:38.837 "name": "BaseBdev3", 00:17:38.837 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:38.837 "is_configured": true, 00:17:38.837 "data_offset": 2048, 00:17:38.837 "data_size": 63488 00:17:38.837 }, 00:17:38.837 { 00:17:38.837 "name": "BaseBdev4", 00:17:38.837 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:38.837 "is_configured": true, 00:17:38.837 "data_offset": 2048, 00:17:38.837 "data_size": 63488 00:17:38.837 } 00:17:38.837 ] 00:17:38.837 }' 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.837 09:36:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.837 [2024-11-15 09:36:27.210171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:38.837 [2024-11-15 09:36:27.210346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.837 [2024-11-15 09:36:27.210400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:38.837 [2024-11-15 09:36:27.210437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.837 [2024-11-15 09:36:27.211095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.837 [2024-11-15 09:36:27.211128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:38.837 [2024-11-15 09:36:27.211238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:38.837 [2024-11-15 09:36:27.211257] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.837 [2024-11-15 09:36:27.211273] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:38.837 [2024-11-15 09:36:27.211290] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:38.837 BaseBdev1 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.837 09:36:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.774 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.033 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.033 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.033 "name": "raid_bdev1", 00:17:40.033 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:40.033 "strip_size_kb": 64, 00:17:40.033 "state": "online", 00:17:40.033 "raid_level": "raid5f", 00:17:40.033 "superblock": true, 00:17:40.033 "num_base_bdevs": 4, 00:17:40.033 "num_base_bdevs_discovered": 3, 00:17:40.033 "num_base_bdevs_operational": 3, 00:17:40.033 "base_bdevs_list": [ 00:17:40.033 { 00:17:40.033 "name": null, 00:17:40.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.033 "is_configured": false, 00:17:40.033 
"data_offset": 0, 00:17:40.033 "data_size": 63488 00:17:40.033 }, 00:17:40.033 { 00:17:40.033 "name": "BaseBdev2", 00:17:40.033 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:40.033 "is_configured": true, 00:17:40.033 "data_offset": 2048, 00:17:40.033 "data_size": 63488 00:17:40.033 }, 00:17:40.033 { 00:17:40.033 "name": "BaseBdev3", 00:17:40.033 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:40.033 "is_configured": true, 00:17:40.033 "data_offset": 2048, 00:17:40.033 "data_size": 63488 00:17:40.033 }, 00:17:40.033 { 00:17:40.033 "name": "BaseBdev4", 00:17:40.033 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:40.033 "is_configured": true, 00:17:40.033 "data_offset": 2048, 00:17:40.033 "data_size": 63488 00:17:40.033 } 00:17:40.033 ] 00:17:40.033 }' 00:17:40.033 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.033 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:40.292 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.552 "name": "raid_bdev1", 00:17:40.552 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:40.552 "strip_size_kb": 64, 00:17:40.552 "state": "online", 00:17:40.552 "raid_level": "raid5f", 00:17:40.552 "superblock": true, 00:17:40.552 "num_base_bdevs": 4, 00:17:40.552 "num_base_bdevs_discovered": 3, 00:17:40.552 "num_base_bdevs_operational": 3, 00:17:40.552 "base_bdevs_list": [ 00:17:40.552 { 00:17:40.552 "name": null, 00:17:40.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.552 "is_configured": false, 00:17:40.552 "data_offset": 0, 00:17:40.552 "data_size": 63488 00:17:40.552 }, 00:17:40.552 { 00:17:40.552 "name": "BaseBdev2", 00:17:40.552 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:40.552 "is_configured": true, 00:17:40.552 "data_offset": 2048, 00:17:40.552 "data_size": 63488 00:17:40.552 }, 00:17:40.552 { 00:17:40.552 "name": "BaseBdev3", 00:17:40.552 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:40.552 "is_configured": true, 00:17:40.552 "data_offset": 2048, 00:17:40.552 "data_size": 63488 00:17:40.552 }, 00:17:40.552 { 00:17:40.552 "name": "BaseBdev4", 00:17:40.552 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:40.552 "is_configured": true, 00:17:40.552 "data_offset": 2048, 00:17:40.552 "data_size": 63488 00:17:40.552 } 00:17:40.552 ] 00:17:40.552 }' 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.552 
09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:40.552 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.553 [2024-11-15 09:36:28.863537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.553 [2024-11-15 09:36:28.863829] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.553 [2024-11-15 09:36:28.863921] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:40.553 request: 00:17:40.553 { 00:17:40.553 "base_bdev": "BaseBdev1", 00:17:40.553 "raid_bdev": "raid_bdev1", 00:17:40.553 "method": "bdev_raid_add_base_bdev", 00:17:40.553 "req_id": 1 00:17:40.553 } 00:17:40.553 Got JSON-RPC error response 00:17:40.553 response: 00:17:40.553 { 00:17:40.553 "code": -22, 00:17:40.553 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:40.553 } 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.553 09:36:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.492 "name": "raid_bdev1", 00:17:41.492 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:41.492 "strip_size_kb": 64, 00:17:41.492 "state": "online", 00:17:41.492 "raid_level": "raid5f", 00:17:41.492 "superblock": true, 00:17:41.492 "num_base_bdevs": 4, 00:17:41.492 "num_base_bdevs_discovered": 3, 00:17:41.492 "num_base_bdevs_operational": 3, 00:17:41.492 "base_bdevs_list": [ 00:17:41.492 { 00:17:41.492 "name": null, 00:17:41.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.492 "is_configured": false, 00:17:41.492 "data_offset": 0, 00:17:41.492 "data_size": 63488 00:17:41.492 }, 00:17:41.492 { 00:17:41.492 "name": "BaseBdev2", 00:17:41.492 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:41.492 "is_configured": true, 00:17:41.492 "data_offset": 2048, 00:17:41.492 "data_size": 63488 00:17:41.492 }, 00:17:41.492 { 00:17:41.492 "name": "BaseBdev3", 00:17:41.492 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:41.492 "is_configured": true, 00:17:41.492 "data_offset": 2048, 00:17:41.492 "data_size": 63488 00:17:41.492 }, 00:17:41.492 { 00:17:41.492 "name": "BaseBdev4", 00:17:41.492 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:41.492 "is_configured": true, 00:17:41.492 "data_offset": 2048, 00:17:41.492 "data_size": 63488 00:17:41.492 } 00:17:41.492 ] 00:17:41.492 }' 00:17:41.492 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.493 09:36:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:42.067 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.067 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.067 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.067 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.067 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.067 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.068 "name": "raid_bdev1", 00:17:42.068 "uuid": "395f3388-ae8e-4add-af22-2efc8188aed2", 00:17:42.068 "strip_size_kb": 64, 00:17:42.068 "state": "online", 00:17:42.068 "raid_level": "raid5f", 00:17:42.068 "superblock": true, 00:17:42.068 "num_base_bdevs": 4, 00:17:42.068 "num_base_bdevs_discovered": 3, 00:17:42.068 "num_base_bdevs_operational": 3, 00:17:42.068 "base_bdevs_list": [ 00:17:42.068 { 00:17:42.068 "name": null, 00:17:42.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.068 "is_configured": false, 00:17:42.068 "data_offset": 0, 00:17:42.068 "data_size": 63488 00:17:42.068 }, 00:17:42.068 { 00:17:42.068 "name": "BaseBdev2", 00:17:42.068 "uuid": "a10ea9c0-1ed7-5b6b-8e08-34052dfd6504", 00:17:42.068 "is_configured": true, 
00:17:42.068 "data_offset": 2048, 00:17:42.068 "data_size": 63488 00:17:42.068 }, 00:17:42.068 { 00:17:42.068 "name": "BaseBdev3", 00:17:42.068 "uuid": "96115724-2b45-5bed-aea5-2c5b7753b7eb", 00:17:42.068 "is_configured": true, 00:17:42.068 "data_offset": 2048, 00:17:42.068 "data_size": 63488 00:17:42.068 }, 00:17:42.068 { 00:17:42.068 "name": "BaseBdev4", 00:17:42.068 "uuid": "1d1a1618-73a1-54ed-b6ac-ad0e099393a4", 00:17:42.068 "is_configured": true, 00:17:42.068 "data_offset": 2048, 00:17:42.068 "data_size": 63488 00:17:42.068 } 00:17:42.068 ] 00:17:42.068 }' 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85556 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85556 ']' 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85556 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85556 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 
-- # echo 'killing process with pid 85556' 00:17:42.068 killing process with pid 85556 00:17:42.068 Received shutdown signal, test time was about 60.000000 seconds 00:17:42.068 00:17:42.068 Latency(us) 00:17:42.068 [2024-11-15T09:36:30.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.068 [2024-11-15T09:36:30.531Z] =================================================================================================================== 00:17:42.068 [2024-11-15T09:36:30.531Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85556 00:17:42.068 [2024-11-15 09:36:30.529278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.068 09:36:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85556 00:17:42.068 [2024-11-15 09:36:30.529442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.068 [2024-11-15 09:36:30.529537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.068 [2024-11-15 09:36:30.529561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:42.638 [2024-11-15 09:36:31.070493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.021 09:36:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:44.021 00:17:44.021 real 0m27.535s 00:17:44.021 user 0m34.466s 00:17:44.021 sys 0m3.279s 00:17:44.021 09:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:44.021 ************************************ 00:17:44.021 END TEST raid5f_rebuild_test_sb 00:17:44.021 ************************************ 00:17:44.021 09:36:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.021 09:36:32 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:44.021 09:36:32 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:44.021 09:36:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:44.021 09:36:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:44.021 09:36:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.021 ************************************ 00:17:44.021 START TEST raid_state_function_test_sb_4k 00:17:44.021 ************************************ 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.021 09:36:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:44.021 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86371 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86371' 00:17:44.022 Process raid pid: 86371 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86371 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86371 ']' 00:17:44.022 09:36:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:44.022 09:36:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.022 [2024-11-15 09:36:32.466818] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:17:44.022 [2024-11-15 09:36:32.466971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.281 [2024-11-15 09:36:32.646931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.541 [2024-11-15 09:36:32.790725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.801 [2024-11-15 09:36:33.028830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.801 [2024-11-15 09:36:33.028988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.061 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.062 [2024-11-15 09:36:33.316615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.062 [2024-11-15 09:36:33.316773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.062 [2024-11-15 09:36:33.316790] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.062 [2024-11-15 09:36:33.316801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.062 
09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.062 "name": "Existed_Raid", 00:17:45.062 "uuid": "b335b84c-a0d1-46ba-8068-48e9c1b30169", 00:17:45.062 "strip_size_kb": 0, 00:17:45.062 "state": "configuring", 00:17:45.062 "raid_level": "raid1", 00:17:45.062 "superblock": true, 00:17:45.062 "num_base_bdevs": 2, 00:17:45.062 "num_base_bdevs_discovered": 0, 00:17:45.062 "num_base_bdevs_operational": 2, 00:17:45.062 "base_bdevs_list": [ 00:17:45.062 { 00:17:45.062 "name": "BaseBdev1", 00:17:45.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.062 "is_configured": false, 00:17:45.062 "data_offset": 0, 00:17:45.062 "data_size": 0 00:17:45.062 }, 00:17:45.062 { 00:17:45.062 "name": "BaseBdev2", 00:17:45.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.062 "is_configured": false, 00:17:45.062 "data_offset": 0, 00:17:45.062 "data_size": 0 00:17:45.062 } 00:17:45.062 ] 00:17:45.062 }' 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.062 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 [2024-11-15 09:36:33.803745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.631 [2024-11-15 09:36:33.803911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 [2024-11-15 09:36:33.815683] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.631 [2024-11-15 09:36:33.815776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.631 [2024-11-15 09:36:33.815805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.631 [2024-11-15 09:36:33.815832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.631 09:36:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 [2024-11-15 09:36:33.870110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.631 BaseBdev1 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.631 [ 00:17:45.631 { 00:17:45.631 "name": "BaseBdev1", 00:17:45.631 "aliases": [ 00:17:45.631 
"9a8636e5-373c-4fa4-b0c6-38f7681a863f" 00:17:45.631 ], 00:17:45.631 "product_name": "Malloc disk", 00:17:45.631 "block_size": 4096, 00:17:45.631 "num_blocks": 8192, 00:17:45.631 "uuid": "9a8636e5-373c-4fa4-b0c6-38f7681a863f", 00:17:45.631 "assigned_rate_limits": { 00:17:45.631 "rw_ios_per_sec": 0, 00:17:45.631 "rw_mbytes_per_sec": 0, 00:17:45.631 "r_mbytes_per_sec": 0, 00:17:45.631 "w_mbytes_per_sec": 0 00:17:45.631 }, 00:17:45.631 "claimed": true, 00:17:45.631 "claim_type": "exclusive_write", 00:17:45.631 "zoned": false, 00:17:45.631 "supported_io_types": { 00:17:45.631 "read": true, 00:17:45.631 "write": true, 00:17:45.631 "unmap": true, 00:17:45.631 "flush": true, 00:17:45.631 "reset": true, 00:17:45.631 "nvme_admin": false, 00:17:45.631 "nvme_io": false, 00:17:45.631 "nvme_io_md": false, 00:17:45.631 "write_zeroes": true, 00:17:45.631 "zcopy": true, 00:17:45.631 "get_zone_info": false, 00:17:45.631 "zone_management": false, 00:17:45.631 "zone_append": false, 00:17:45.631 "compare": false, 00:17:45.631 "compare_and_write": false, 00:17:45.631 "abort": true, 00:17:45.631 "seek_hole": false, 00:17:45.631 "seek_data": false, 00:17:45.631 "copy": true, 00:17:45.631 "nvme_iov_md": false 00:17:45.631 }, 00:17:45.631 "memory_domains": [ 00:17:45.631 { 00:17:45.631 "dma_device_id": "system", 00:17:45.631 "dma_device_type": 1 00:17:45.631 }, 00:17:45.631 { 00:17:45.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.631 "dma_device_type": 2 00:17:45.631 } 00:17:45.631 ], 00:17:45.631 "driver_specific": {} 00:17:45.631 } 00:17:45.631 ] 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.631 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.632 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.632 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.632 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.632 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.632 "name": "Existed_Raid", 00:17:45.632 "uuid": "7d246f35-7955-49e0-bb50-f839199fe5c7", 00:17:45.632 "strip_size_kb": 0, 00:17:45.632 "state": "configuring", 00:17:45.632 "raid_level": "raid1", 00:17:45.632 "superblock": true, 00:17:45.632 "num_base_bdevs": 2, 00:17:45.632 
"num_base_bdevs_discovered": 1, 00:17:45.632 "num_base_bdevs_operational": 2, 00:17:45.632 "base_bdevs_list": [ 00:17:45.632 { 00:17:45.632 "name": "BaseBdev1", 00:17:45.632 "uuid": "9a8636e5-373c-4fa4-b0c6-38f7681a863f", 00:17:45.632 "is_configured": true, 00:17:45.632 "data_offset": 256, 00:17:45.632 "data_size": 7936 00:17:45.632 }, 00:17:45.632 { 00:17:45.632 "name": "BaseBdev2", 00:17:45.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.632 "is_configured": false, 00:17:45.632 "data_offset": 0, 00:17:45.632 "data_size": 0 00:17:45.632 } 00:17:45.632 ] 00:17:45.632 }' 00:17:45.632 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.632 09:36:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.892 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.892 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.892 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.892 [2024-11-15 09:36:34.349351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.892 [2024-11-15 09:36:34.349424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:45.892 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.892 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:45.892 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.892 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.152 [2024-11-15 09:36:34.361371] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.152 [2024-11-15 09:36:34.363676] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.152 [2024-11-15 09:36:34.363721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.152 "name": "Existed_Raid", 00:17:46.152 "uuid": "66fa7a95-9b8f-4541-9914-f103245a1367", 00:17:46.152 "strip_size_kb": 0, 00:17:46.152 "state": "configuring", 00:17:46.152 "raid_level": "raid1", 00:17:46.152 "superblock": true, 00:17:46.152 "num_base_bdevs": 2, 00:17:46.152 "num_base_bdevs_discovered": 1, 00:17:46.152 "num_base_bdevs_operational": 2, 00:17:46.152 "base_bdevs_list": [ 00:17:46.152 { 00:17:46.152 "name": "BaseBdev1", 00:17:46.152 "uuid": "9a8636e5-373c-4fa4-b0c6-38f7681a863f", 00:17:46.152 "is_configured": true, 00:17:46.152 "data_offset": 256, 00:17:46.152 "data_size": 7936 00:17:46.152 }, 00:17:46.152 { 00:17:46.152 "name": "BaseBdev2", 00:17:46.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.152 "is_configured": false, 00:17:46.152 "data_offset": 0, 00:17:46.152 "data_size": 0 00:17:46.152 } 00:17:46.152 ] 00:17:46.152 }' 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.152 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.419 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:46.420 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.420 09:36:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.683 [2024-11-15 09:36:34.906113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.683 [2024-11-15 09:36:34.906584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.683 [2024-11-15 09:36:34.906640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.683 [2024-11-15 09:36:34.907013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:46.683 [2024-11-15 09:36:34.907262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.683 [2024-11-15 09:36:34.907318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:46.683 BaseBdev2 00:17:46.683 [2024-11-15 09:36:34.907563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:46.683 09:36:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.683 [ 00:17:46.683 { 00:17:46.683 "name": "BaseBdev2", 00:17:46.683 "aliases": [ 00:17:46.683 "05b9d9eb-5e19-4ce3-a5b0-e648f798705c" 00:17:46.683 ], 00:17:46.683 "product_name": "Malloc disk", 00:17:46.683 "block_size": 4096, 00:17:46.683 "num_blocks": 8192, 00:17:46.683 "uuid": "05b9d9eb-5e19-4ce3-a5b0-e648f798705c", 00:17:46.683 "assigned_rate_limits": { 00:17:46.683 "rw_ios_per_sec": 0, 00:17:46.683 "rw_mbytes_per_sec": 0, 00:17:46.683 "r_mbytes_per_sec": 0, 00:17:46.683 "w_mbytes_per_sec": 0 00:17:46.683 }, 00:17:46.683 "claimed": true, 00:17:46.683 "claim_type": "exclusive_write", 00:17:46.683 "zoned": false, 00:17:46.683 "supported_io_types": { 00:17:46.683 "read": true, 00:17:46.683 "write": true, 00:17:46.683 "unmap": true, 00:17:46.683 "flush": true, 00:17:46.683 "reset": true, 00:17:46.683 "nvme_admin": false, 00:17:46.683 "nvme_io": false, 00:17:46.683 "nvme_io_md": false, 00:17:46.683 "write_zeroes": true, 00:17:46.683 "zcopy": true, 00:17:46.683 "get_zone_info": false, 00:17:46.683 "zone_management": false, 00:17:46.683 "zone_append": false, 00:17:46.683 "compare": false, 00:17:46.683 "compare_and_write": false, 00:17:46.683 "abort": true, 00:17:46.683 "seek_hole": false, 00:17:46.683 "seek_data": false, 00:17:46.683 "copy": true, 00:17:46.683 "nvme_iov_md": false 
00:17:46.683 }, 00:17:46.683 "memory_domains": [ 00:17:46.683 { 00:17:46.683 "dma_device_id": "system", 00:17:46.683 "dma_device_type": 1 00:17:46.683 }, 00:17:46.683 { 00:17:46.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.683 "dma_device_type": 2 00:17:46.683 } 00:17:46.683 ], 00:17:46.683 "driver_specific": {} 00:17:46.683 } 00:17:46.683 ] 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.683 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.684 09:36:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.684 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.684 "name": "Existed_Raid", 00:17:46.684 "uuid": "66fa7a95-9b8f-4541-9914-f103245a1367", 00:17:46.684 "strip_size_kb": 0, 00:17:46.684 "state": "online", 00:17:46.684 "raid_level": "raid1", 00:17:46.684 "superblock": true, 00:17:46.684 "num_base_bdevs": 2, 00:17:46.684 "num_base_bdevs_discovered": 2, 00:17:46.684 "num_base_bdevs_operational": 2, 00:17:46.684 "base_bdevs_list": [ 00:17:46.684 { 00:17:46.684 "name": "BaseBdev1", 00:17:46.684 "uuid": "9a8636e5-373c-4fa4-b0c6-38f7681a863f", 00:17:46.684 "is_configured": true, 00:17:46.684 "data_offset": 256, 00:17:46.684 "data_size": 7936 00:17:46.684 }, 00:17:46.684 { 00:17:46.684 "name": "BaseBdev2", 00:17:46.684 "uuid": "05b9d9eb-5e19-4ce3-a5b0-e648f798705c", 00:17:46.684 "is_configured": true, 00:17:46.684 "data_offset": 256, 00:17:46.684 "data_size": 7936 00:17:46.684 } 00:17:46.684 ] 00:17:46.684 }' 00:17:46.684 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.684 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.944 09:36:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.944 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.205 [2024-11-15 09:36:35.409631] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.205 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.205 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:47.205 "name": "Existed_Raid", 00:17:47.205 "aliases": [ 00:17:47.205 "66fa7a95-9b8f-4541-9914-f103245a1367" 00:17:47.205 ], 00:17:47.205 "product_name": "Raid Volume", 00:17:47.205 "block_size": 4096, 00:17:47.205 "num_blocks": 7936, 00:17:47.205 "uuid": "66fa7a95-9b8f-4541-9914-f103245a1367", 00:17:47.205 "assigned_rate_limits": { 00:17:47.205 "rw_ios_per_sec": 0, 00:17:47.205 "rw_mbytes_per_sec": 0, 00:17:47.205 "r_mbytes_per_sec": 0, 00:17:47.205 "w_mbytes_per_sec": 0 00:17:47.205 }, 00:17:47.205 "claimed": false, 00:17:47.205 "zoned": false, 00:17:47.205 "supported_io_types": { 00:17:47.205 "read": true, 
00:17:47.205 "write": true, 00:17:47.205 "unmap": false, 00:17:47.205 "flush": false, 00:17:47.205 "reset": true, 00:17:47.205 "nvme_admin": false, 00:17:47.205 "nvme_io": false, 00:17:47.205 "nvme_io_md": false, 00:17:47.205 "write_zeroes": true, 00:17:47.205 "zcopy": false, 00:17:47.205 "get_zone_info": false, 00:17:47.205 "zone_management": false, 00:17:47.205 "zone_append": false, 00:17:47.205 "compare": false, 00:17:47.205 "compare_and_write": false, 00:17:47.205 "abort": false, 00:17:47.205 "seek_hole": false, 00:17:47.205 "seek_data": false, 00:17:47.205 "copy": false, 00:17:47.205 "nvme_iov_md": false 00:17:47.205 }, 00:17:47.205 "memory_domains": [ 00:17:47.205 { 00:17:47.205 "dma_device_id": "system", 00:17:47.205 "dma_device_type": 1 00:17:47.205 }, 00:17:47.205 { 00:17:47.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.205 "dma_device_type": 2 00:17:47.205 }, 00:17:47.205 { 00:17:47.206 "dma_device_id": "system", 00:17:47.206 "dma_device_type": 1 00:17:47.206 }, 00:17:47.206 { 00:17:47.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.206 "dma_device_type": 2 00:17:47.206 } 00:17:47.206 ], 00:17:47.206 "driver_specific": { 00:17:47.206 "raid": { 00:17:47.206 "uuid": "66fa7a95-9b8f-4541-9914-f103245a1367", 00:17:47.206 "strip_size_kb": 0, 00:17:47.206 "state": "online", 00:17:47.206 "raid_level": "raid1", 00:17:47.206 "superblock": true, 00:17:47.206 "num_base_bdevs": 2, 00:17:47.206 "num_base_bdevs_discovered": 2, 00:17:47.206 "num_base_bdevs_operational": 2, 00:17:47.206 "base_bdevs_list": [ 00:17:47.206 { 00:17:47.206 "name": "BaseBdev1", 00:17:47.206 "uuid": "9a8636e5-373c-4fa4-b0c6-38f7681a863f", 00:17:47.206 "is_configured": true, 00:17:47.206 "data_offset": 256, 00:17:47.206 "data_size": 7936 00:17:47.206 }, 00:17:47.206 { 00:17:47.206 "name": "BaseBdev2", 00:17:47.206 "uuid": "05b9d9eb-5e19-4ce3-a5b0-e648f798705c", 00:17:47.206 "is_configured": true, 00:17:47.206 "data_offset": 256, 00:17:47.206 "data_size": 7936 00:17:47.206 } 
00:17:47.206 ] 00:17:47.206 } 00:17:47.206 } 00:17:47.206 }' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:47.206 BaseBdev2' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.206 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.206 [2024-11-15 09:36:35.645035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:47.465 09:36:35 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.465 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.466 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.466 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.466 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.466 "name": "Existed_Raid", 00:17:47.466 "uuid": "66fa7a95-9b8f-4541-9914-f103245a1367", 00:17:47.466 "strip_size_kb": 0, 00:17:47.466 "state": "online", 00:17:47.466 "raid_level": "raid1", 00:17:47.466 "superblock": true, 00:17:47.466 
"num_base_bdevs": 2, 00:17:47.466 "num_base_bdevs_discovered": 1, 00:17:47.466 "num_base_bdevs_operational": 1, 00:17:47.466 "base_bdevs_list": [ 00:17:47.466 { 00:17:47.466 "name": null, 00:17:47.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.466 "is_configured": false, 00:17:47.466 "data_offset": 0, 00:17:47.466 "data_size": 7936 00:17:47.466 }, 00:17:47.466 { 00:17:47.466 "name": "BaseBdev2", 00:17:47.466 "uuid": "05b9d9eb-5e19-4ce3-a5b0-e648f798705c", 00:17:47.466 "is_configured": true, 00:17:47.466 "data_offset": 256, 00:17:47.466 "data_size": 7936 00:17:47.466 } 00:17:47.466 ] 00:17:47.466 }' 00:17:47.466 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.466 09:36:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.035 [2024-11-15 09:36:36.253547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.035 [2024-11-15 09:36:36.253687] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.035 [2024-11-15 09:36:36.362808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.035 [2024-11-15 09:36:36.362915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.035 [2024-11-15 09:36:36.362933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:48.035 09:36:36 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86371 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86371 ']' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86371 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86371 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:48.035 killing process with pid 86371 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86371' 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86371 00:17:48.035 [2024-11-15 09:36:36.454190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.035 09:36:36 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86371 00:17:48.035 [2024-11-15 09:36:36.473136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.416 09:36:37 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:49.416 00:17:49.416 real 0m5.389s 00:17:49.416 user 0m7.581s 00:17:49.416 sys 0m1.019s 00:17:49.416 09:36:37 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.416 09:36:37 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.416 ************************************ 00:17:49.416 END TEST raid_state_function_test_sb_4k 00:17:49.416 ************************************ 00:17:49.416 09:36:37 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:49.416 09:36:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:49.416 09:36:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:49.416 09:36:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.416 ************************************ 00:17:49.416 START TEST raid_superblock_test_4k 00:17:49.416 ************************************ 00:17:49.416 09:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:17:49.416 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:49.417 
09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86619 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86619 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86619 ']' 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:49.417 09:36:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.676 [2024-11-15 09:36:37.914690] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:17:49.676 [2024-11-15 09:36:37.914994] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86619 ] 00:17:49.676 [2024-11-15 09:36:38.096143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.936 [2024-11-15 09:36:38.246243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.196 [2024-11-15 09:36:38.497952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.196 [2024-11-15 09:36:38.498042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.455 malloc1 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.455 [2024-11-15 09:36:38.848572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.455 [2024-11-15 09:36:38.848746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.455 [2024-11-15 09:36:38.848797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:50.455 [2024-11-15 09:36:38.848830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.455 [2024-11-15 09:36:38.851615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.455 [2024-11-15 09:36:38.851701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.455 pt1 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.455 malloc2 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.455 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.455 [2024-11-15 09:36:38.917136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.455 [2024-11-15 09:36:38.917216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.455 [2024-11-15 09:36:38.917241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:50.455 [2024-11-15 09:36:38.917252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.455 [2024-11-15 09:36:38.919923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.716 [2024-11-15 
09:36:38.920065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.716 pt2 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.716 [2024-11-15 09:36:38.929202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.716 [2024-11-15 09:36:38.931541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.716 [2024-11-15 09:36:38.931865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:50.716 [2024-11-15 09:36:38.931907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.716 [2024-11-15 09:36:38.932229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:50.716 [2024-11-15 09:36:38.932437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:50.716 [2024-11-15 09:36:38.932456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:50.716 [2024-11-15 09:36:38.932646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.716 "name": "raid_bdev1", 00:17:50.716 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:50.716 "strip_size_kb": 0, 00:17:50.716 "state": "online", 00:17:50.716 "raid_level": "raid1", 00:17:50.716 "superblock": true, 00:17:50.716 "num_base_bdevs": 2, 00:17:50.716 
"num_base_bdevs_discovered": 2, 00:17:50.716 "num_base_bdevs_operational": 2, 00:17:50.716 "base_bdevs_list": [ 00:17:50.716 { 00:17:50.716 "name": "pt1", 00:17:50.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.716 "is_configured": true, 00:17:50.716 "data_offset": 256, 00:17:50.716 "data_size": 7936 00:17:50.716 }, 00:17:50.716 { 00:17:50.716 "name": "pt2", 00:17:50.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.716 "is_configured": true, 00:17:50.716 "data_offset": 256, 00:17:50.716 "data_size": 7936 00:17:50.716 } 00:17:50.716 ] 00:17:50.716 }' 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.716 09:36:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.976 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.976 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:50.977 [2024-11-15 09:36:39.384715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.977 "name": "raid_bdev1", 00:17:50.977 "aliases": [ 00:17:50.977 "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42" 00:17:50.977 ], 00:17:50.977 "product_name": "Raid Volume", 00:17:50.977 "block_size": 4096, 00:17:50.977 "num_blocks": 7936, 00:17:50.977 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:50.977 "assigned_rate_limits": { 00:17:50.977 "rw_ios_per_sec": 0, 00:17:50.977 "rw_mbytes_per_sec": 0, 00:17:50.977 "r_mbytes_per_sec": 0, 00:17:50.977 "w_mbytes_per_sec": 0 00:17:50.977 }, 00:17:50.977 "claimed": false, 00:17:50.977 "zoned": false, 00:17:50.977 "supported_io_types": { 00:17:50.977 "read": true, 00:17:50.977 "write": true, 00:17:50.977 "unmap": false, 00:17:50.977 "flush": false, 00:17:50.977 "reset": true, 00:17:50.977 "nvme_admin": false, 00:17:50.977 "nvme_io": false, 00:17:50.977 "nvme_io_md": false, 00:17:50.977 "write_zeroes": true, 00:17:50.977 "zcopy": false, 00:17:50.977 "get_zone_info": false, 00:17:50.977 "zone_management": false, 00:17:50.977 "zone_append": false, 00:17:50.977 "compare": false, 00:17:50.977 "compare_and_write": false, 00:17:50.977 "abort": false, 00:17:50.977 "seek_hole": false, 00:17:50.977 "seek_data": false, 00:17:50.977 "copy": false, 00:17:50.977 "nvme_iov_md": false 00:17:50.977 }, 00:17:50.977 "memory_domains": [ 00:17:50.977 { 00:17:50.977 "dma_device_id": "system", 00:17:50.977 "dma_device_type": 1 00:17:50.977 }, 00:17:50.977 { 00:17:50.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.977 "dma_device_type": 2 00:17:50.977 }, 00:17:50.977 { 00:17:50.977 "dma_device_id": "system", 00:17:50.977 "dma_device_type": 1 00:17:50.977 }, 00:17:50.977 { 00:17:50.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.977 "dma_device_type": 2 00:17:50.977 } 00:17:50.977 ], 
00:17:50.977 "driver_specific": { 00:17:50.977 "raid": { 00:17:50.977 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:50.977 "strip_size_kb": 0, 00:17:50.977 "state": "online", 00:17:50.977 "raid_level": "raid1", 00:17:50.977 "superblock": true, 00:17:50.977 "num_base_bdevs": 2, 00:17:50.977 "num_base_bdevs_discovered": 2, 00:17:50.977 "num_base_bdevs_operational": 2, 00:17:50.977 "base_bdevs_list": [ 00:17:50.977 { 00:17:50.977 "name": "pt1", 00:17:50.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.977 "is_configured": true, 00:17:50.977 "data_offset": 256, 00:17:50.977 "data_size": 7936 00:17:50.977 }, 00:17:50.977 { 00:17:50.977 "name": "pt2", 00:17:50.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.977 "is_configured": true, 00:17:50.977 "data_offset": 256, 00:17:50.977 "data_size": 7936 00:17:50.977 } 00:17:50.977 ] 00:17:50.977 } 00:17:50.977 } 00:17:50.977 }' 00:17:50.977 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.237 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:51.237 pt2' 00:17:51.237 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.237 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:51.237 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.238 [2024-11-15 09:36:39.588404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42 ']' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.238 [2024-11-15 09:36:39.627984] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.238 [2024-11-15 09:36:39.628024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.238 [2024-11-15 09:36:39.628170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.238 [2024-11-15 09:36:39.628243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.238 [2024-11-15 09:36:39.628257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.238 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.499 [2024-11-15 09:36:39.767786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:51.499 [2024-11-15 09:36:39.770338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:51.499 [2024-11-15 09:36:39.770486] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:51.499 [2024-11-15 09:36:39.770621] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:51.499 [2024-11-15 09:36:39.770696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.499 [2024-11-15 09:36:39.770734] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:51.499 request: 00:17:51.499 { 00:17:51.499 "name": "raid_bdev1", 00:17:51.499 "raid_level": "raid1", 00:17:51.499 "base_bdevs": [ 00:17:51.499 "malloc1", 00:17:51.499 "malloc2" 00:17:51.499 ], 00:17:51.499 "superblock": false, 00:17:51.499 "method": "bdev_raid_create", 00:17:51.499 "req_id": 1 00:17:51.499 } 00:17:51.499 Got JSON-RPC error response 00:17:51.499 response: 00:17:51.499 { 00:17:51.499 "code": -17, 00:17:51.499 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:51.499 } 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.499 [2024-11-15 09:36:39.835641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:51.499 [2024-11-15 09:36:39.835821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.499 [2024-11-15 09:36:39.835876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:51.499 [2024-11-15 09:36:39.835935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.499 [2024-11-15 09:36:39.838822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.499 [2024-11-15 09:36:39.838969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:51.499 [2024-11-15 09:36:39.839123] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:51.499 [2024-11-15 09:36:39.839230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:51.499 pt1 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:51.499 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.500 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.500 "name": "raid_bdev1", 00:17:51.500 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:51.500 "strip_size_kb": 0, 00:17:51.500 "state": "configuring", 00:17:51.500 "raid_level": "raid1", 00:17:51.500 "superblock": true, 00:17:51.500 "num_base_bdevs": 2, 00:17:51.500 "num_base_bdevs_discovered": 1, 00:17:51.500 "num_base_bdevs_operational": 2, 00:17:51.500 "base_bdevs_list": [ 00:17:51.500 { 00:17:51.500 "name": "pt1", 00:17:51.500 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.500 "is_configured": true, 00:17:51.500 "data_offset": 256, 00:17:51.500 "data_size": 7936 00:17:51.500 }, 00:17:51.500 { 00:17:51.500 "name": null, 00:17:51.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.500 "is_configured": false, 00:17:51.500 "data_offset": 256, 00:17:51.500 "data_size": 7936 00:17:51.500 } 
00:17:51.500 ] 00:17:51.500 }' 00:17:51.500 09:36:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.500 09:36:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.069 [2024-11-15 09:36:40.278924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.069 [2024-11-15 09:36:40.279125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.069 [2024-11-15 09:36:40.279172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:52.069 [2024-11-15 09:36:40.279205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.069 [2024-11-15 09:36:40.279857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.069 [2024-11-15 09:36:40.279958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.069 [2024-11-15 09:36:40.280120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:52.069 [2024-11-15 09:36:40.280188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.069 [2024-11-15 09:36:40.280370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:52.069 [2024-11-15 09:36:40.280416] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:52.069 [2024-11-15 09:36:40.280738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:52.069 [2024-11-15 09:36:40.280994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:52.069 [2024-11-15 09:36:40.281043] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:52.069 [2024-11-15 09:36:40.281271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.069 pt2 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.069 "name": "raid_bdev1", 00:17:52.069 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:52.069 "strip_size_kb": 0, 00:17:52.069 "state": "online", 00:17:52.069 "raid_level": "raid1", 00:17:52.069 "superblock": true, 00:17:52.069 "num_base_bdevs": 2, 00:17:52.069 "num_base_bdevs_discovered": 2, 00:17:52.069 "num_base_bdevs_operational": 2, 00:17:52.069 "base_bdevs_list": [ 00:17:52.069 { 00:17:52.069 "name": "pt1", 00:17:52.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.069 "is_configured": true, 00:17:52.069 "data_offset": 256, 00:17:52.069 "data_size": 7936 00:17:52.069 }, 00:17:52.069 { 00:17:52.069 "name": "pt2", 00:17:52.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.069 "is_configured": true, 00:17:52.069 "data_offset": 256, 00:17:52.069 "data_size": 7936 00:17:52.069 } 00:17:52.069 ] 00:17:52.069 }' 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.069 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.330 [2024-11-15 09:36:40.742334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.330 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.330 "name": "raid_bdev1", 00:17:52.330 "aliases": [ 00:17:52.330 "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42" 00:17:52.330 ], 00:17:52.330 "product_name": "Raid Volume", 00:17:52.330 "block_size": 4096, 00:17:52.330 "num_blocks": 7936, 00:17:52.330 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:52.330 "assigned_rate_limits": { 00:17:52.330 "rw_ios_per_sec": 0, 00:17:52.330 "rw_mbytes_per_sec": 0, 00:17:52.330 "r_mbytes_per_sec": 0, 00:17:52.330 "w_mbytes_per_sec": 0 00:17:52.330 }, 00:17:52.330 "claimed": false, 00:17:52.330 "zoned": false, 00:17:52.330 "supported_io_types": { 00:17:52.330 "read": true, 00:17:52.330 "write": true, 00:17:52.330 "unmap": false, 
00:17:52.330 "flush": false, 00:17:52.330 "reset": true, 00:17:52.330 "nvme_admin": false, 00:17:52.330 "nvme_io": false, 00:17:52.330 "nvme_io_md": false, 00:17:52.330 "write_zeroes": true, 00:17:52.330 "zcopy": false, 00:17:52.330 "get_zone_info": false, 00:17:52.330 "zone_management": false, 00:17:52.330 "zone_append": false, 00:17:52.330 "compare": false, 00:17:52.330 "compare_and_write": false, 00:17:52.330 "abort": false, 00:17:52.330 "seek_hole": false, 00:17:52.330 "seek_data": false, 00:17:52.330 "copy": false, 00:17:52.330 "nvme_iov_md": false 00:17:52.330 }, 00:17:52.330 "memory_domains": [ 00:17:52.330 { 00:17:52.330 "dma_device_id": "system", 00:17:52.330 "dma_device_type": 1 00:17:52.330 }, 00:17:52.330 { 00:17:52.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.330 "dma_device_type": 2 00:17:52.330 }, 00:17:52.330 { 00:17:52.330 "dma_device_id": "system", 00:17:52.330 "dma_device_type": 1 00:17:52.330 }, 00:17:52.330 { 00:17:52.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.330 "dma_device_type": 2 00:17:52.330 } 00:17:52.330 ], 00:17:52.330 "driver_specific": { 00:17:52.330 "raid": { 00:17:52.330 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:52.330 "strip_size_kb": 0, 00:17:52.330 "state": "online", 00:17:52.330 "raid_level": "raid1", 00:17:52.330 "superblock": true, 00:17:52.330 "num_base_bdevs": 2, 00:17:52.330 "num_base_bdevs_discovered": 2, 00:17:52.330 "num_base_bdevs_operational": 2, 00:17:52.330 "base_bdevs_list": [ 00:17:52.330 { 00:17:52.330 "name": "pt1", 00:17:52.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.330 "is_configured": true, 00:17:52.330 "data_offset": 256, 00:17:52.330 "data_size": 7936 00:17:52.330 }, 00:17:52.330 { 00:17:52.330 "name": "pt2", 00:17:52.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.330 "is_configured": true, 00:17:52.330 "data_offset": 256, 00:17:52.330 "data_size": 7936 00:17:52.330 } 00:17:52.330 ] 00:17:52.330 } 00:17:52.330 } 00:17:52.330 }' 00:17:52.330 
09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:52.590 pt2' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.590 09:36:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 [2024-11-15 09:36:40.985989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42 '!=' 5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42 ']' 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 [2024-11-15 09:36:41.033648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.590 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.591 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.591 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.591 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.591 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.591 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:52.591 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.850 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.850 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.850 "name": "raid_bdev1", 00:17:52.850 "uuid": 
"5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:52.850 "strip_size_kb": 0, 00:17:52.850 "state": "online", 00:17:52.850 "raid_level": "raid1", 00:17:52.850 "superblock": true, 00:17:52.850 "num_base_bdevs": 2, 00:17:52.850 "num_base_bdevs_discovered": 1, 00:17:52.850 "num_base_bdevs_operational": 1, 00:17:52.850 "base_bdevs_list": [ 00:17:52.850 { 00:17:52.850 "name": null, 00:17:52.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.850 "is_configured": false, 00:17:52.850 "data_offset": 0, 00:17:52.850 "data_size": 7936 00:17:52.850 }, 00:17:52.850 { 00:17:52.850 "name": "pt2", 00:17:52.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.850 "is_configured": true, 00:17:52.850 "data_offset": 256, 00:17:52.850 "data_size": 7936 00:17:52.850 } 00:17:52.850 ] 00:17:52.850 }' 00:17:52.850 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.850 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.110 [2024-11-15 09:36:41.516771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.110 [2024-11-15 09:36:41.516918] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.110 [2024-11-15 09:36:41.517062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.110 [2024-11-15 09:36:41.517155] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.110 [2024-11-15 09:36:41.517219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.110 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.370 [2024-11-15 09:36:41.584605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.370 [2024-11-15 09:36:41.584768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.370 [2024-11-15 09:36:41.584821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:53.370 [2024-11-15 09:36:41.584867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.370 [2024-11-15 09:36:41.587640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.370 [2024-11-15 09:36:41.587723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.370 [2024-11-15 09:36:41.587888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.370 [2024-11-15 09:36:41.587988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.370 [2024-11-15 09:36:41.588172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:53.370 [2024-11-15 09:36:41.588223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.370 [2024-11-15 09:36:41.588545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:53.370 [2024-11-15 09:36:41.588786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:53.370 [2024-11-15 09:36:41.588835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:53.370 [2024-11-15 09:36:41.589143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.370 pt2 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.370 09:36:41 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.370 "name": "raid_bdev1", 00:17:53.370 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:53.370 "strip_size_kb": 0, 00:17:53.370 "state": "online", 00:17:53.370 "raid_level": "raid1", 00:17:53.370 "superblock": true, 00:17:53.370 "num_base_bdevs": 2, 00:17:53.370 "num_base_bdevs_discovered": 1, 00:17:53.370 "num_base_bdevs_operational": 1, 00:17:53.370 "base_bdevs_list": [ 00:17:53.370 { 00:17:53.370 "name": null, 00:17:53.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.370 "is_configured": false, 00:17:53.370 "data_offset": 256, 00:17:53.370 "data_size": 7936 00:17:53.370 }, 00:17:53.370 { 00:17:53.370 "name": "pt2", 00:17:53.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.370 "is_configured": true, 00:17:53.370 "data_offset": 256, 00:17:53.370 "data_size": 7936 00:17:53.370 } 00:17:53.370 ] 00:17:53.370 }' 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.370 09:36:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.630 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.630 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.630 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.630 [2024-11-15 09:36:42.056294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.630 [2024-11-15 09:36:42.056344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.630 [2024-11-15 09:36:42.056450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.631 [2024-11-15 09:36:42.056518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:53.631 [2024-11-15 09:36:42.056530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:53.631 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.631 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.631 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:53.631 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.631 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.631 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.890 [2024-11-15 09:36:42.120257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.890 [2024-11-15 09:36:42.120455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.890 [2024-11-15 09:36:42.120486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:53.890 [2024-11-15 09:36:42.120498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.890 [2024-11-15 09:36:42.123382] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.890 [2024-11-15 09:36:42.123425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.890 [2024-11-15 09:36:42.123570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:53.890 [2024-11-15 09:36:42.123626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.890 [2024-11-15 09:36:42.123797] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:53.890 [2024-11-15 09:36:42.123808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.890 [2024-11-15 09:36:42.123827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:53.890 [2024-11-15 09:36:42.124005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.890 [2024-11-15 09:36:42.124162] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:53.890 [2024-11-15 09:36:42.124212] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.890 [2024-11-15 09:36:42.124571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:53.890 [2024-11-15 09:36:42.124799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:53.890 [2024-11-15 09:36:42.124866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:53.890 [2024-11-15 09:36:42.125146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.890 pt1 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.890 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.891 "name": "raid_bdev1", 00:17:53.891 "uuid": "5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42", 00:17:53.891 "strip_size_kb": 0, 00:17:53.891 "state": "online", 00:17:53.891 
"raid_level": "raid1", 00:17:53.891 "superblock": true, 00:17:53.891 "num_base_bdevs": 2, 00:17:53.891 "num_base_bdevs_discovered": 1, 00:17:53.891 "num_base_bdevs_operational": 1, 00:17:53.891 "base_bdevs_list": [ 00:17:53.891 { 00:17:53.891 "name": null, 00:17:53.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.891 "is_configured": false, 00:17:53.891 "data_offset": 256, 00:17:53.891 "data_size": 7936 00:17:53.891 }, 00:17:53.891 { 00:17:53.891 "name": "pt2", 00:17:53.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.891 "is_configured": true, 00:17:53.891 "data_offset": 256, 00:17:53.891 "data_size": 7936 00:17:53.891 } 00:17:53.891 ] 00:17:53.891 }' 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.891 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.150 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:54.150 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.150 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.150 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:54.150 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:17:54.409 [2024-11-15 09:36:42.635685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42 '!=' 5ff5dfed-6f2d-4c79-a35c-4c22b90c6f42 ']' 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86619 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86619 ']' 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86619 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86619 00:17:54.409 killing process with pid 86619 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86619' 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86619 00:17:54.409 [2024-11-15 09:36:42.710232] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.409 [2024-11-15 09:36:42.710362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.409 09:36:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86619 00:17:54.409 [2024-11-15 09:36:42.710426] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.409 [2024-11-15 09:36:42.710444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:54.668 [2024-11-15 09:36:42.949503] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.050 09:36:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:56.050 ************************************ 00:17:56.050 END TEST raid_superblock_test_4k 00:17:56.050 ************************************ 00:17:56.050 00:17:56.050 real 0m6.398s 00:17:56.050 user 0m9.468s 00:17:56.050 sys 0m1.240s 00:17:56.050 09:36:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:56.050 09:36:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.050 09:36:44 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:56.050 09:36:44 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:56.050 09:36:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:56.050 09:36:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:56.050 09:36:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.050 ************************************ 00:17:56.050 START TEST raid_rebuild_test_sb_4k 00:17:56.050 ************************************ 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:56.050 
09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86947 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86947 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86947 ']' 00:17:56.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:56.050 09:36:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:56.050 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:56.050 Zero copy mechanism will not be used. 00:17:56.050 [2024-11-15 09:36:44.398965] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:17:56.051 [2024-11-15 09:36:44.399119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86947 ] 00:17:56.311 [2024-11-15 09:36:44.582787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.311 [2024-11-15 09:36:44.725398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.570 [2024-11-15 09:36:44.966159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.571 [2024-11-15 09:36:44.966249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.830 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.830 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:17:56.830 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.830 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:56.830 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.830 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 BaseBdev1_malloc 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 [2024-11-15 09:36:45.319914] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:57.091 [2024-11-15 09:36:45.320001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.091 [2024-11-15 09:36:45.320025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:57.091 [2024-11-15 09:36:45.320038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.091 [2024-11-15 09:36:45.322554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.091 [2024-11-15 09:36:45.322601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:57.091 BaseBdev1 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 BaseBdev2_malloc 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.091 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.091 [2024-11-15 09:36:45.381900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:57.091 [2024-11-15 09:36:45.381990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:57.091 [2024-11-15 09:36:45.382013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:57.091 [2024-11-15 09:36:45.382025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.092 [2024-11-15 09:36:45.384529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.092 [2024-11-15 09:36:45.384645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:57.092 BaseBdev2 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.092 spare_malloc 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.092 spare_delay 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.092 
[2024-11-15 09:36:45.471710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:57.092 [2024-11-15 09:36:45.471804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.092 [2024-11-15 09:36:45.471833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:57.092 [2024-11-15 09:36:45.471846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.092 [2024-11-15 09:36:45.474520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.092 [2024-11-15 09:36:45.474633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:57.092 spare 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.092 [2024-11-15 09:36:45.483771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.092 [2024-11-15 09:36:45.486163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.092 [2024-11-15 09:36:45.486368] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:57.092 [2024-11-15 09:36:45.486392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.092 [2024-11-15 09:36:45.486692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:57.092 [2024-11-15 09:36:45.486901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:57.092 [2024-11-15 
09:36:45.486912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:57.092 [2024-11-15 09:36:45.487094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.092 "name": "raid_bdev1", 00:17:57.092 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:17:57.092 "strip_size_kb": 0, 00:17:57.092 "state": "online", 00:17:57.092 "raid_level": "raid1", 00:17:57.092 "superblock": true, 00:17:57.092 "num_base_bdevs": 2, 00:17:57.092 "num_base_bdevs_discovered": 2, 00:17:57.092 "num_base_bdevs_operational": 2, 00:17:57.092 "base_bdevs_list": [ 00:17:57.092 { 00:17:57.092 "name": "BaseBdev1", 00:17:57.092 "uuid": "7cf36876-a9e0-5f3b-8249-3945d5b1ee54", 00:17:57.092 "is_configured": true, 00:17:57.092 "data_offset": 256, 00:17:57.092 "data_size": 7936 00:17:57.092 }, 00:17:57.092 { 00:17:57.092 "name": "BaseBdev2", 00:17:57.092 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:17:57.092 "is_configured": true, 00:17:57.092 "data_offset": 256, 00:17:57.092 "data_size": 7936 00:17:57.092 } 00:17:57.092 ] 00:17:57.092 }' 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.092 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.662 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.662 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:57.662 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.662 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.662 [2024-11-15 09:36:45.955306] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.662 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.662 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:57.663 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:57.663 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.663 09:36:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.663 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.663 
09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:57.922 [2024-11-15 09:36:46.250548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:57.922 /dev/nbd0 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.922 1+0 records in 00:17:57.922 1+0 records out 00:17:57.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356807 s, 11.5 MB/s 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:17:57.922 09:36:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:57.922 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:17:57.923 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.923 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:57.923 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:57.923 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:57.923 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:58.606 7936+0 records in 00:17:58.606 7936+0 records out 00:17:58.606 32505856 bytes (33 MB, 31 MiB) copied, 0.658908 s, 49.3 MB/s 00:17:58.606 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:58.606 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:58.606 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:58.606 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:58.606 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:58.606 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.606 09:36:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:58.882 
[2024-11-15 09:36:47.242314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.882 [2024-11-15 09:36:47.258436] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.882 09:36:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.882 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.883 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.883 "name": "raid_bdev1", 00:17:58.883 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:17:58.883 "strip_size_kb": 0, 00:17:58.883 "state": "online", 00:17:58.883 "raid_level": "raid1", 00:17:58.883 "superblock": true, 00:17:58.883 "num_base_bdevs": 2, 00:17:58.883 "num_base_bdevs_discovered": 1, 00:17:58.883 "num_base_bdevs_operational": 1, 00:17:58.883 "base_bdevs_list": [ 00:17:58.883 { 00:17:58.883 "name": null, 00:17:58.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.883 "is_configured": false, 00:17:58.883 "data_offset": 0, 00:17:58.883 "data_size": 7936 00:17:58.883 }, 00:17:58.883 { 00:17:58.883 "name": "BaseBdev2", 00:17:58.883 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:17:58.883 "is_configured": true, 00:17:58.883 "data_offset": 256, 00:17:58.883 
"data_size": 7936 00:17:58.883 } 00:17:58.883 ] 00:17:58.883 }' 00:17:58.883 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.883 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.452 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.452 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.452 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:59.452 [2024-11-15 09:36:47.709694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.452 [2024-11-15 09:36:47.729826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:59.452 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.452 09:36:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:59.452 [2024-11-15 09:36:47.732223] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.389 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.389 "name": "raid_bdev1", 00:18:00.389 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:00.389 "strip_size_kb": 0, 00:18:00.389 "state": "online", 00:18:00.389 "raid_level": "raid1", 00:18:00.389 "superblock": true, 00:18:00.389 "num_base_bdevs": 2, 00:18:00.389 "num_base_bdevs_discovered": 2, 00:18:00.389 "num_base_bdevs_operational": 2, 00:18:00.389 "process": { 00:18:00.389 "type": "rebuild", 00:18:00.389 "target": "spare", 00:18:00.389 "progress": { 00:18:00.389 "blocks": 2560, 00:18:00.389 "percent": 32 00:18:00.390 } 00:18:00.390 }, 00:18:00.390 "base_bdevs_list": [ 00:18:00.390 { 00:18:00.390 "name": "spare", 00:18:00.390 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:00.390 "is_configured": true, 00:18:00.390 "data_offset": 256, 00:18:00.390 "data_size": 7936 00:18:00.390 }, 00:18:00.390 { 00:18:00.390 "name": "BaseBdev2", 00:18:00.390 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:00.390 "is_configured": true, 00:18:00.390 "data_offset": 256, 00:18:00.390 "data_size": 7936 00:18:00.390 } 00:18:00.390 ] 00:18:00.390 }' 00:18:00.390 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.390 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.390 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.649 [2024-11-15 09:36:48.871145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.649 [2024-11-15 09:36:48.943050] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:00.649 [2024-11-15 09:36:48.943190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.649 [2024-11-15 09:36:48.943210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.649 [2024-11-15 09:36:48.943222] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.649 09:36:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:00.649 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.649 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.649 "name": "raid_bdev1", 00:18:00.649 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:00.649 "strip_size_kb": 0, 00:18:00.649 "state": "online", 00:18:00.649 "raid_level": "raid1", 00:18:00.649 "superblock": true, 00:18:00.649 "num_base_bdevs": 2, 00:18:00.649 "num_base_bdevs_discovered": 1, 00:18:00.649 "num_base_bdevs_operational": 1, 00:18:00.649 "base_bdevs_list": [ 00:18:00.649 { 00:18:00.649 "name": null, 00:18:00.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.649 "is_configured": false, 00:18:00.649 "data_offset": 0, 00:18:00.649 "data_size": 7936 00:18:00.649 }, 00:18:00.649 { 00:18:00.649 "name": "BaseBdev2", 00:18:00.649 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:00.649 "is_configured": true, 00:18:00.649 "data_offset": 256, 00:18:00.649 "data_size": 7936 00:18:00.649 } 00:18:00.649 ] 00:18:00.649 }' 00:18:00.649 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.649 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.219 09:36:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.219 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.220 "name": "raid_bdev1", 00:18:01.220 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:01.220 "strip_size_kb": 0, 00:18:01.220 "state": "online", 00:18:01.220 "raid_level": "raid1", 00:18:01.220 "superblock": true, 00:18:01.220 "num_base_bdevs": 2, 00:18:01.220 "num_base_bdevs_discovered": 1, 00:18:01.220 "num_base_bdevs_operational": 1, 00:18:01.220 "base_bdevs_list": [ 00:18:01.220 { 00:18:01.220 "name": null, 00:18:01.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.220 "is_configured": false, 00:18:01.220 "data_offset": 0, 00:18:01.220 "data_size": 7936 00:18:01.220 }, 00:18:01.220 { 00:18:01.220 "name": "BaseBdev2", 00:18:01.220 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:01.220 "is_configured": true, 00:18:01.220 "data_offset": 
256, 00:18:01.220 "data_size": 7936 00:18:01.220 } 00:18:01.220 ] 00:18:01.220 }' 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:01.220 [2024-11-15 09:36:49.566638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.220 [2024-11-15 09:36:49.586410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.220 09:36:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:01.220 [2024-11-15 09:36:49.589020] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.157 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.416 "name": "raid_bdev1", 00:18:02.416 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:02.416 "strip_size_kb": 0, 00:18:02.416 "state": "online", 00:18:02.416 "raid_level": "raid1", 00:18:02.416 "superblock": true, 00:18:02.416 "num_base_bdevs": 2, 00:18:02.416 "num_base_bdevs_discovered": 2, 00:18:02.416 "num_base_bdevs_operational": 2, 00:18:02.416 "process": { 00:18:02.416 "type": "rebuild", 00:18:02.416 "target": "spare", 00:18:02.416 "progress": { 00:18:02.416 "blocks": 2560, 00:18:02.416 "percent": 32 00:18:02.416 } 00:18:02.416 }, 00:18:02.416 "base_bdevs_list": [ 00:18:02.416 { 00:18:02.416 "name": "spare", 00:18:02.416 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:02.416 "is_configured": true, 00:18:02.416 "data_offset": 256, 00:18:02.416 "data_size": 7936 00:18:02.416 }, 00:18:02.416 { 00:18:02.416 "name": "BaseBdev2", 00:18:02.416 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:02.416 "is_configured": true, 00:18:02.416 "data_offset": 256, 00:18:02.416 "data_size": 7936 00:18:02.416 } 00:18:02.416 ] 00:18:02.416 }' 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:02.416 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=704 00:18:02.416 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.417 09:36:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.417 "name": "raid_bdev1", 00:18:02.417 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:02.417 "strip_size_kb": 0, 00:18:02.417 "state": "online", 00:18:02.417 "raid_level": "raid1", 00:18:02.417 "superblock": true, 00:18:02.417 "num_base_bdevs": 2, 00:18:02.417 "num_base_bdevs_discovered": 2, 00:18:02.417 "num_base_bdevs_operational": 2, 00:18:02.417 "process": { 00:18:02.417 "type": "rebuild", 00:18:02.417 "target": "spare", 00:18:02.417 "progress": { 00:18:02.417 "blocks": 2816, 00:18:02.417 "percent": 35 00:18:02.417 } 00:18:02.417 }, 00:18:02.417 "base_bdevs_list": [ 00:18:02.417 { 00:18:02.417 "name": "spare", 00:18:02.417 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:02.417 "is_configured": true, 00:18:02.417 "data_offset": 256, 00:18:02.417 "data_size": 7936 00:18:02.417 }, 00:18:02.417 { 00:18:02.417 "name": "BaseBdev2", 00:18:02.417 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:02.417 "is_configured": true, 00:18:02.417 "data_offset": 256, 00:18:02.417 "data_size": 7936 00:18:02.417 } 00:18:02.417 ] 00:18:02.417 }' 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.417 09:36:50 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.796 "name": "raid_bdev1", 00:18:03.796 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:03.796 "strip_size_kb": 0, 00:18:03.796 "state": "online", 00:18:03.796 "raid_level": "raid1", 00:18:03.796 "superblock": true, 00:18:03.796 "num_base_bdevs": 2, 00:18:03.796 "num_base_bdevs_discovered": 2, 00:18:03.796 "num_base_bdevs_operational": 2, 00:18:03.796 "process": { 00:18:03.796 "type": "rebuild", 00:18:03.796 "target": "spare", 00:18:03.796 "progress": { 00:18:03.796 "blocks": 5632, 00:18:03.796 "percent": 70 00:18:03.796 } 00:18:03.796 }, 00:18:03.796 "base_bdevs_list": [ 00:18:03.796 { 
00:18:03.796 "name": "spare", 00:18:03.796 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:03.796 "is_configured": true, 00:18:03.796 "data_offset": 256, 00:18:03.796 "data_size": 7936 00:18:03.796 }, 00:18:03.796 { 00:18:03.796 "name": "BaseBdev2", 00:18:03.796 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:03.796 "is_configured": true, 00:18:03.796 "data_offset": 256, 00:18:03.796 "data_size": 7936 00:18:03.796 } 00:18:03.796 ] 00:18:03.796 }' 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.796 09:36:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.365 [2024-11-15 09:36:52.716368] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:04.365 [2024-11-15 09:36:52.716586] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:04.365 [2024-11-15 09:36:52.716790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.626 "name": "raid_bdev1", 00:18:04.626 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:04.626 "strip_size_kb": 0, 00:18:04.626 "state": "online", 00:18:04.626 "raid_level": "raid1", 00:18:04.626 "superblock": true, 00:18:04.626 "num_base_bdevs": 2, 00:18:04.626 "num_base_bdevs_discovered": 2, 00:18:04.626 "num_base_bdevs_operational": 2, 00:18:04.626 "base_bdevs_list": [ 00:18:04.626 { 00:18:04.626 "name": "spare", 00:18:04.626 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:04.626 "is_configured": true, 00:18:04.626 "data_offset": 256, 00:18:04.626 "data_size": 7936 00:18:04.626 }, 00:18:04.626 { 00:18:04.626 "name": "BaseBdev2", 00:18:04.626 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:04.626 "is_configured": true, 00:18:04.626 "data_offset": 256, 00:18:04.626 "data_size": 7936 00:18:04.626 } 00:18:04.626 ] 00:18:04.626 }' 00:18:04.626 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.885 "name": "raid_bdev1", 00:18:04.885 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:04.885 "strip_size_kb": 0, 00:18:04.885 "state": "online", 00:18:04.885 "raid_level": "raid1", 00:18:04.885 "superblock": true, 00:18:04.885 "num_base_bdevs": 2, 00:18:04.885 "num_base_bdevs_discovered": 2, 00:18:04.885 "num_base_bdevs_operational": 2, 00:18:04.885 "base_bdevs_list": [ 00:18:04.885 { 00:18:04.885 "name": "spare", 00:18:04.885 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:04.885 "is_configured": true, 00:18:04.885 
"data_offset": 256, 00:18:04.885 "data_size": 7936 00:18:04.885 }, 00:18:04.885 { 00:18:04.885 "name": "BaseBdev2", 00:18:04.885 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:04.885 "is_configured": true, 00:18:04.885 "data_offset": 256, 00:18:04.885 "data_size": 7936 00:18:04.885 } 00:18:04.885 ] 00:18:04.885 }' 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.885 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.886 "name": "raid_bdev1", 00:18:04.886 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:04.886 "strip_size_kb": 0, 00:18:04.886 "state": "online", 00:18:04.886 "raid_level": "raid1", 00:18:04.886 "superblock": true, 00:18:04.886 "num_base_bdevs": 2, 00:18:04.886 "num_base_bdevs_discovered": 2, 00:18:04.886 "num_base_bdevs_operational": 2, 00:18:04.886 "base_bdevs_list": [ 00:18:04.886 { 00:18:04.886 "name": "spare", 00:18:04.886 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:04.886 "is_configured": true, 00:18:04.886 "data_offset": 256, 00:18:04.886 "data_size": 7936 00:18:04.886 }, 00:18:04.886 { 00:18:04.886 "name": "BaseBdev2", 00:18:04.886 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:04.886 "is_configured": true, 00:18:04.886 "data_offset": 256, 00:18:04.886 "data_size": 7936 00:18:04.886 } 00:18:04.886 ] 00:18:04.886 }' 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.886 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.454 
[2024-11-15 09:36:53.708940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.454 [2024-11-15 09:36:53.708985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.454 [2024-11-15 09:36:53.709097] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.454 [2024-11-15 09:36:53.709180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.454 [2024-11-15 09:36:53.709194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.454 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:05.713 /dev/nbd0 00:18:05.713 09:36:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.713 1+0 records in 00:18:05.713 1+0 records out 00:18:05.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290129 s, 14.1 MB/s 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.713 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:05.972 /dev/nbd1 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:05.972 1+0 records in 00:18:05.972 1+0 records out 00:18:05.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470848 s, 8.7 MB/s 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.972 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:06.231 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:06.231 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.231 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.231 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:06.231 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:06.231 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.231 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.489 09:36:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:06.748 09:36:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.748 [2024-11-15 09:36:55.032647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.748 [2024-11-15 09:36:55.032742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.748 [2024-11-15 09:36:55.032773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:06.748 [2024-11-15 09:36:55.032784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.748 [2024-11-15 09:36:55.035733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.748 
[2024-11-15 09:36:55.035836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.748 [2024-11-15 09:36:55.035998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:06.748 [2024-11-15 09:36:55.036104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.748 [2024-11-15 09:36:55.036323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.748 spare 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.748 [2024-11-15 09:36:55.136295] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:06.748 [2024-11-15 09:36:55.136502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:06.748 [2024-11-15 09:36:55.137003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:06.748 [2024-11-15 09:36:55.137321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:06.748 [2024-11-15 09:36:55.137373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:06.748 [2024-11-15 09:36:55.137713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.748 09:36:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.748 "name": "raid_bdev1", 00:18:06.748 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:06.748 "strip_size_kb": 0, 00:18:06.748 "state": "online", 00:18:06.748 "raid_level": "raid1", 00:18:06.748 "superblock": true, 00:18:06.748 "num_base_bdevs": 2, 00:18:06.748 "num_base_bdevs_discovered": 2, 00:18:06.748 "num_base_bdevs_operational": 2, 
00:18:06.748 "base_bdevs_list": [ 00:18:06.748 { 00:18:06.748 "name": "spare", 00:18:06.748 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:06.748 "is_configured": true, 00:18:06.748 "data_offset": 256, 00:18:06.748 "data_size": 7936 00:18:06.748 }, 00:18:06.748 { 00:18:06.748 "name": "BaseBdev2", 00:18:06.748 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:06.748 "is_configured": true, 00:18:06.748 "data_offset": 256, 00:18:06.748 "data_size": 7936 00:18:06.748 } 00:18:06.748 ] 00:18:06.748 }' 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.748 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.316 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.316 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.316 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.316 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.317 "name": "raid_bdev1", 00:18:07.317 
"uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:07.317 "strip_size_kb": 0, 00:18:07.317 "state": "online", 00:18:07.317 "raid_level": "raid1", 00:18:07.317 "superblock": true, 00:18:07.317 "num_base_bdevs": 2, 00:18:07.317 "num_base_bdevs_discovered": 2, 00:18:07.317 "num_base_bdevs_operational": 2, 00:18:07.317 "base_bdevs_list": [ 00:18:07.317 { 00:18:07.317 "name": "spare", 00:18:07.317 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:07.317 "is_configured": true, 00:18:07.317 "data_offset": 256, 00:18:07.317 "data_size": 7936 00:18:07.317 }, 00:18:07.317 { 00:18:07.317 "name": "BaseBdev2", 00:18:07.317 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:07.317 "is_configured": true, 00:18:07.317 "data_offset": 256, 00:18:07.317 "data_size": 7936 00:18:07.317 } 00:18:07.317 ] 00:18:07.317 }' 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 [2024-11-15 09:36:55.716710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.317 09:36:55 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.317 "name": "raid_bdev1", 00:18:07.317 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:07.317 "strip_size_kb": 0, 00:18:07.317 "state": "online", 00:18:07.317 "raid_level": "raid1", 00:18:07.317 "superblock": true, 00:18:07.317 "num_base_bdevs": 2, 00:18:07.317 "num_base_bdevs_discovered": 1, 00:18:07.317 "num_base_bdevs_operational": 1, 00:18:07.317 "base_bdevs_list": [ 00:18:07.317 { 00:18:07.317 "name": null, 00:18:07.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.317 "is_configured": false, 00:18:07.317 "data_offset": 0, 00:18:07.317 "data_size": 7936 00:18:07.317 }, 00:18:07.317 { 00:18:07.317 "name": "BaseBdev2", 00:18:07.317 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:07.317 "is_configured": true, 00:18:07.317 "data_offset": 256, 00:18:07.317 "data_size": 7936 00:18:07.317 } 00:18:07.317 ] 00:18:07.317 }' 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.317 09:36:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.885 09:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.885 09:36:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.885 09:36:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.885 [2024-11-15 09:36:56.164085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.885 [2024-11-15 09:36:56.164490] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:18:07.885 [2024-11-15 09:36:56.164567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:07.885 [2024-11-15 09:36:56.164640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.885 [2024-11-15 09:36:56.184730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:07.885 09:36:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.885 09:36:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:07.885 [2024-11-15 09:36:56.187494] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:08.825 "name": "raid_bdev1", 00:18:08.825 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:08.825 "strip_size_kb": 0, 00:18:08.825 "state": "online", 00:18:08.825 "raid_level": "raid1", 00:18:08.825 "superblock": true, 00:18:08.825 "num_base_bdevs": 2, 00:18:08.825 "num_base_bdevs_discovered": 2, 00:18:08.825 "num_base_bdevs_operational": 2, 00:18:08.825 "process": { 00:18:08.825 "type": "rebuild", 00:18:08.825 "target": "spare", 00:18:08.825 "progress": { 00:18:08.825 "blocks": 2560, 00:18:08.825 "percent": 32 00:18:08.825 } 00:18:08.825 }, 00:18:08.825 "base_bdevs_list": [ 00:18:08.825 { 00:18:08.825 "name": "spare", 00:18:08.825 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:08.825 "is_configured": true, 00:18:08.825 "data_offset": 256, 00:18:08.825 "data_size": 7936 00:18:08.825 }, 00:18:08.825 { 00:18:08.825 "name": "BaseBdev2", 00:18:08.825 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:08.825 "is_configured": true, 00:18:08.825 "data_offset": 256, 00:18:08.825 "data_size": 7936 00:18:08.825 } 00:18:08.825 ] 00:18:08.825 }' 00:18:08.825 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.099 [2024-11-15 09:36:57.346546] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:18:09.099 [2024-11-15 09:36:57.397983] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.099 [2024-11-15 09:36:57.398256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.099 [2024-11-15 09:36:57.398310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.099 [2024-11-15 09:36:57.398346] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.099 "name": "raid_bdev1", 00:18:09.099 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:09.099 "strip_size_kb": 0, 00:18:09.099 "state": "online", 00:18:09.099 "raid_level": "raid1", 00:18:09.099 "superblock": true, 00:18:09.099 "num_base_bdevs": 2, 00:18:09.099 "num_base_bdevs_discovered": 1, 00:18:09.099 "num_base_bdevs_operational": 1, 00:18:09.099 "base_bdevs_list": [ 00:18:09.099 { 00:18:09.099 "name": null, 00:18:09.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.099 "is_configured": false, 00:18:09.099 "data_offset": 0, 00:18:09.099 "data_size": 7936 00:18:09.099 }, 00:18:09.099 { 00:18:09.099 "name": "BaseBdev2", 00:18:09.099 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:09.099 "is_configured": true, 00:18:09.099 "data_offset": 256, 00:18:09.099 "data_size": 7936 00:18:09.099 } 00:18:09.099 ] 00:18:09.099 }' 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.099 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.683 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:09.683 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.683 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.683 [2024-11-15 09:36:57.921964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:09.683 [2024-11-15 
09:36:57.922142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.683 [2024-11-15 09:36:57.922191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:09.683 [2024-11-15 09:36:57.922256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.683 [2024-11-15 09:36:57.922959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.683 [2024-11-15 09:36:57.923043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:09.683 [2024-11-15 09:36:57.923233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:09.683 [2024-11-15 09:36:57.923292] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:09.683 [2024-11-15 09:36:57.923357] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:09.683 [2024-11-15 09:36:57.923460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.683 [2024-11-15 09:36:57.942785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:09.683 spare 00:18:09.683 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.683 09:36:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:09.683 [2024-11-15 09:36:57.945456] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.623 "name": "raid_bdev1", 00:18:10.623 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:10.623 "strip_size_kb": 0, 00:18:10.623 
"state": "online", 00:18:10.623 "raid_level": "raid1", 00:18:10.623 "superblock": true, 00:18:10.623 "num_base_bdevs": 2, 00:18:10.623 "num_base_bdevs_discovered": 2, 00:18:10.623 "num_base_bdevs_operational": 2, 00:18:10.623 "process": { 00:18:10.623 "type": "rebuild", 00:18:10.623 "target": "spare", 00:18:10.623 "progress": { 00:18:10.623 "blocks": 2560, 00:18:10.623 "percent": 32 00:18:10.623 } 00:18:10.623 }, 00:18:10.623 "base_bdevs_list": [ 00:18:10.623 { 00:18:10.623 "name": "spare", 00:18:10.623 "uuid": "d25788d2-e281-5b0a-b395-e836ea55b7e2", 00:18:10.623 "is_configured": true, 00:18:10.623 "data_offset": 256, 00:18:10.623 "data_size": 7936 00:18:10.623 }, 00:18:10.623 { 00:18:10.623 "name": "BaseBdev2", 00:18:10.623 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:10.623 "is_configured": true, 00:18:10.623 "data_offset": 256, 00:18:10.623 "data_size": 7936 00:18:10.623 } 00:18:10.623 ] 00:18:10.623 }' 00:18:10.623 09:36:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.623 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.623 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.883 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:10.883 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:10.883 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.883 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.883 [2024-11-15 09:36:59.101127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.883 [2024-11-15 09:36:59.155916] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:10.883 [2024-11-15 09:36:59.156003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.883 [2024-11-15 09:36:59.156024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.883 [2024-11-15 09:36:59.156032] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.883 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.883 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.883 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.884 09:36:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.884 "name": "raid_bdev1", 00:18:10.884 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:10.884 "strip_size_kb": 0, 00:18:10.884 "state": "online", 00:18:10.884 "raid_level": "raid1", 00:18:10.884 "superblock": true, 00:18:10.884 "num_base_bdevs": 2, 00:18:10.884 "num_base_bdevs_discovered": 1, 00:18:10.884 "num_base_bdevs_operational": 1, 00:18:10.884 "base_bdevs_list": [ 00:18:10.884 { 00:18:10.884 "name": null, 00:18:10.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.884 "is_configured": false, 00:18:10.884 "data_offset": 0, 00:18:10.884 "data_size": 7936 00:18:10.884 }, 00:18:10.884 { 00:18:10.884 "name": "BaseBdev2", 00:18:10.884 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:10.884 "is_configured": true, 00:18:10.884 "data_offset": 256, 00:18:10.884 "data_size": 7936 00:18:10.884 } 00:18:10.884 ] 00:18:10.884 }' 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.884 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.455 "name": "raid_bdev1", 00:18:11.455 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:11.455 "strip_size_kb": 0, 00:18:11.455 "state": "online", 00:18:11.455 "raid_level": "raid1", 00:18:11.455 "superblock": true, 00:18:11.455 "num_base_bdevs": 2, 00:18:11.455 "num_base_bdevs_discovered": 1, 00:18:11.455 "num_base_bdevs_operational": 1, 00:18:11.455 "base_bdevs_list": [ 00:18:11.455 { 00:18:11.455 "name": null, 00:18:11.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.455 "is_configured": false, 00:18:11.455 "data_offset": 0, 00:18:11.455 "data_size": 7936 00:18:11.455 }, 00:18:11.455 { 00:18:11.455 "name": "BaseBdev2", 00:18:11.455 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:11.455 "is_configured": true, 00:18:11.455 "data_offset": 256, 00:18:11.455 "data_size": 7936 00:18:11.455 } 00:18:11.455 ] 00:18:11.455 }' 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.455 [2024-11-15 09:36:59.795860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:11.455 [2024-11-15 09:36:59.795944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.455 [2024-11-15 09:36:59.795975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:11.455 [2024-11-15 09:36:59.795997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.455 [2024-11-15 09:36:59.796616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.455 [2024-11-15 09:36:59.796638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:11.455 [2024-11-15 09:36:59.796748] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:11.455 [2024-11-15 09:36:59.796766] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:11.455 [2024-11-15 09:36:59.796779] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:11.455 [2024-11-15 09:36:59.796792] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:11.455 BaseBdev1 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.455 09:36:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.393 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.650 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.650 "name": "raid_bdev1", 00:18:12.650 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:12.650 "strip_size_kb": 0, 00:18:12.650 "state": "online", 00:18:12.650 "raid_level": "raid1", 00:18:12.650 "superblock": true, 00:18:12.650 "num_base_bdevs": 2, 00:18:12.650 "num_base_bdevs_discovered": 1, 00:18:12.650 "num_base_bdevs_operational": 1, 00:18:12.650 "base_bdevs_list": [ 00:18:12.650 { 00:18:12.650 "name": null, 00:18:12.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.650 "is_configured": false, 00:18:12.650 "data_offset": 0, 00:18:12.650 "data_size": 7936 00:18:12.650 }, 00:18:12.650 { 00:18:12.650 "name": "BaseBdev2", 00:18:12.650 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:12.650 "is_configured": true, 00:18:12.650 "data_offset": 256, 00:18:12.650 "data_size": 7936 00:18:12.650 } 00:18:12.650 ] 00:18:12.650 }' 00:18:12.650 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.650 09:37:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.907 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.166 "name": "raid_bdev1", 00:18:13.166 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:13.166 "strip_size_kb": 0, 00:18:13.166 "state": "online", 00:18:13.166 "raid_level": "raid1", 00:18:13.166 "superblock": true, 00:18:13.166 "num_base_bdevs": 2, 00:18:13.166 "num_base_bdevs_discovered": 1, 00:18:13.166 "num_base_bdevs_operational": 1, 00:18:13.166 "base_bdevs_list": [ 00:18:13.166 { 00:18:13.166 "name": null, 00:18:13.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.166 "is_configured": false, 00:18:13.166 "data_offset": 0, 00:18:13.166 "data_size": 7936 00:18:13.166 }, 00:18:13.166 { 00:18:13.166 "name": "BaseBdev2", 00:18:13.166 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:13.166 "is_configured": true, 00:18:13.166 "data_offset": 256, 00:18:13.166 "data_size": 7936 00:18:13.166 } 00:18:13.166 ] 00:18:13.166 }' 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.166 [2024-11-15 09:37:01.493079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.166 [2024-11-15 09:37:01.493387] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:13.166 [2024-11-15 09:37:01.493414] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:13.166 request: 00:18:13.166 { 00:18:13.166 "base_bdev": "BaseBdev1", 00:18:13.166 "raid_bdev": "raid_bdev1", 00:18:13.166 "method": "bdev_raid_add_base_bdev", 00:18:13.166 "req_id": 1 00:18:13.166 } 00:18:13.166 Got JSON-RPC error response 00:18:13.166 response: 00:18:13.166 { 00:18:13.166 "code": -22, 00:18:13.166 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:13.166 } 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.166 09:37:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.105 "name": "raid_bdev1", 00:18:14.105 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:14.105 "strip_size_kb": 0, 00:18:14.105 "state": "online", 00:18:14.105 "raid_level": "raid1", 00:18:14.105 "superblock": true, 00:18:14.105 "num_base_bdevs": 2, 00:18:14.105 "num_base_bdevs_discovered": 1, 00:18:14.105 "num_base_bdevs_operational": 1, 00:18:14.105 "base_bdevs_list": [ 00:18:14.105 { 00:18:14.105 "name": null, 00:18:14.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.105 "is_configured": false, 00:18:14.105 "data_offset": 0, 00:18:14.105 "data_size": 7936 00:18:14.105 }, 00:18:14.105 { 00:18:14.105 "name": "BaseBdev2", 00:18:14.105 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:14.105 "is_configured": true, 00:18:14.105 "data_offset": 256, 00:18:14.105 "data_size": 7936 00:18:14.105 } 00:18:14.105 ] 00:18:14.105 }' 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.105 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.673 09:37:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.673 09:37:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.673 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.673 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.673 "name": "raid_bdev1", 00:18:14.673 "uuid": "bb4a62d1-99ed-43e1-91a6-517d66930bcc", 00:18:14.673 "strip_size_kb": 0, 00:18:14.673 "state": "online", 00:18:14.673 "raid_level": "raid1", 00:18:14.673 "superblock": true, 00:18:14.673 "num_base_bdevs": 2, 00:18:14.673 "num_base_bdevs_discovered": 1, 00:18:14.673 "num_base_bdevs_operational": 1, 00:18:14.673 "base_bdevs_list": [ 00:18:14.673 { 00:18:14.673 "name": null, 00:18:14.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.673 "is_configured": false, 00:18:14.673 "data_offset": 0, 00:18:14.673 "data_size": 7936 00:18:14.673 }, 00:18:14.673 { 00:18:14.673 "name": "BaseBdev2", 00:18:14.673 "uuid": "c7eb38e7-d485-5cea-b061-401072b51307", 00:18:14.673 "is_configured": true, 00:18:14.673 "data_offset": 256, 00:18:14.673 "data_size": 7936 00:18:14.673 } 00:18:14.673 ] 00:18:14.673 }' 00:18:14.673 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.673 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.673 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.933 09:37:03 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86947 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86947 ']' 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86947 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86947 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:14.933 killing process with pid 86947 00:18:14.933 Received shutdown signal, test time was about 60.000000 seconds 00:18:14.933 00:18:14.933 Latency(us) 00:18:14.933 [2024-11-15T09:37:03.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.933 [2024-11-15T09:37:03.396Z] =================================================================================================================== 00:18:14.933 [2024-11-15T09:37:03.396Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86947' 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86947 00:18:14.933 [2024-11-15 09:37:03.176794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.933 09:37:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86947 00:18:14.933 [2024-11-15 09:37:03.176973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.933 [2024-11-15 
09:37:03.177035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.933 [2024-11-15 09:37:03.177047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:15.193 [2024-11-15 09:37:03.476523] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:16.131 09:37:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:16.131 ************************************ 00:18:16.131 END TEST raid_rebuild_test_sb_4k 00:18:16.131 ************************************ 00:18:16.131 00:18:16.131 real 0m20.270s 00:18:16.131 user 0m26.302s 00:18:16.131 sys 0m2.962s 00:18:16.131 09:37:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:16.131 09:37:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.395 09:37:04 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:16.395 09:37:04 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:16.395 09:37:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:16.395 09:37:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:16.395 09:37:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:16.395 ************************************ 00:18:16.395 START TEST raid_state_function_test_sb_md_separate 00:18:16.395 ************************************ 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:16.395 
09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:16.395 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:16.396 09:37:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87639 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87639' 00:18:16.396 Process raid pid: 87639 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87639 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87639 ']' 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:16.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:16.396 09:37:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.396 [2024-11-15 09:37:04.727539] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:18:16.396 [2024-11-15 09:37:04.727655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.658 [2024-11-15 09:37:04.902826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.658 [2024-11-15 09:37:05.016587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.917 [2024-11-15 09:37:05.215001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.917 [2024-11-15 09:37:05.215047] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.176 [2024-11-15 09:37:05.583984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.176 [2024-11-15 09:37:05.584157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:17.176 [2024-11-15 09:37:05.584172] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.176 [2024-11-15 09:37:05.584182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.176 09:37:05 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.176 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.435 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.435 "name": "Existed_Raid", 00:18:17.435 "uuid": "7849ff09-70c5-47d3-a30c-2b4f9e977f32", 00:18:17.435 "strip_size_kb": 0, 00:18:17.435 "state": "configuring", 00:18:17.435 "raid_level": "raid1", 00:18:17.435 "superblock": true, 00:18:17.435 "num_base_bdevs": 2, 00:18:17.435 "num_base_bdevs_discovered": 0, 00:18:17.435 "num_base_bdevs_operational": 2, 00:18:17.435 "base_bdevs_list": [ 00:18:17.435 { 00:18:17.435 "name": "BaseBdev1", 00:18:17.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.435 "is_configured": false, 00:18:17.435 "data_offset": 0, 00:18:17.435 "data_size": 0 00:18:17.435 }, 00:18:17.436 { 00:18:17.436 "name": "BaseBdev2", 00:18:17.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.436 "is_configured": false, 00:18:17.436 "data_offset": 0, 00:18:17.436 "data_size": 0 00:18:17.436 } 00:18:17.436 ] 00:18:17.436 }' 00:18:17.436 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.436 09:37:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 [2024-11-15 
09:37:06.019135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:17.695 [2024-11-15 09:37:06.019235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 [2024-11-15 09:37:06.031110] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:17.695 [2024-11-15 09:37:06.031189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:17.695 [2024-11-15 09:37:06.031215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:17.695 [2024-11-15 09:37:06.031238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 [2024-11-15 09:37:06.080712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.695 BaseBdev1 
00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 [ 00:18:17.695 { 00:18:17.695 "name": "BaseBdev1", 00:18:17.695 "aliases": [ 00:18:17.695 "a1d16e6b-1c1a-45b7-a254-e8ca73d69619" 00:18:17.695 ], 00:18:17.695 "product_name": "Malloc disk", 00:18:17.695 
"block_size": 4096, 00:18:17.695 "num_blocks": 8192, 00:18:17.695 "uuid": "a1d16e6b-1c1a-45b7-a254-e8ca73d69619", 00:18:17.695 "md_size": 32, 00:18:17.695 "md_interleave": false, 00:18:17.695 "dif_type": 0, 00:18:17.695 "assigned_rate_limits": { 00:18:17.695 "rw_ios_per_sec": 0, 00:18:17.695 "rw_mbytes_per_sec": 0, 00:18:17.695 "r_mbytes_per_sec": 0, 00:18:17.695 "w_mbytes_per_sec": 0 00:18:17.695 }, 00:18:17.695 "claimed": true, 00:18:17.695 "claim_type": "exclusive_write", 00:18:17.695 "zoned": false, 00:18:17.695 "supported_io_types": { 00:18:17.695 "read": true, 00:18:17.695 "write": true, 00:18:17.695 "unmap": true, 00:18:17.695 "flush": true, 00:18:17.695 "reset": true, 00:18:17.695 "nvme_admin": false, 00:18:17.695 "nvme_io": false, 00:18:17.695 "nvme_io_md": false, 00:18:17.695 "write_zeroes": true, 00:18:17.695 "zcopy": true, 00:18:17.695 "get_zone_info": false, 00:18:17.695 "zone_management": false, 00:18:17.695 "zone_append": false, 00:18:17.695 "compare": false, 00:18:17.695 "compare_and_write": false, 00:18:17.695 "abort": true, 00:18:17.695 "seek_hole": false, 00:18:17.695 "seek_data": false, 00:18:17.695 "copy": true, 00:18:17.695 "nvme_iov_md": false 00:18:17.695 }, 00:18:17.695 "memory_domains": [ 00:18:17.695 { 00:18:17.695 "dma_device_id": "system", 00:18:17.695 "dma_device_type": 1 00:18:17.695 }, 00:18:17.695 { 00:18:17.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.695 "dma_device_type": 2 00:18:17.695 } 00:18:17.695 ], 00:18:17.695 "driver_specific": {} 00:18:17.695 } 00:18:17.695 ] 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:17.695 09:37:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.695 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.954 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.954 "name": "Existed_Raid", 00:18:17.954 "uuid": "c4560345-ff0b-4503-89d5-154f3308e3ad", 
00:18:17.954 "strip_size_kb": 0, 00:18:17.954 "state": "configuring", 00:18:17.954 "raid_level": "raid1", 00:18:17.954 "superblock": true, 00:18:17.954 "num_base_bdevs": 2, 00:18:17.954 "num_base_bdevs_discovered": 1, 00:18:17.954 "num_base_bdevs_operational": 2, 00:18:17.954 "base_bdevs_list": [ 00:18:17.954 { 00:18:17.954 "name": "BaseBdev1", 00:18:17.954 "uuid": "a1d16e6b-1c1a-45b7-a254-e8ca73d69619", 00:18:17.954 "is_configured": true, 00:18:17.954 "data_offset": 256, 00:18:17.954 "data_size": 7936 00:18:17.954 }, 00:18:17.954 { 00:18:17.954 "name": "BaseBdev2", 00:18:17.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.954 "is_configured": false, 00:18:17.954 "data_offset": 0, 00:18:17.954 "data_size": 0 00:18:17.954 } 00:18:17.954 ] 00:18:17.954 }' 00:18:17.954 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.954 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.213 [2024-11-15 09:37:06.568067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.213 [2024-11-15 09:37:06.568127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:18.213 09:37:06 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.213 [2024-11-15 09:37:06.580102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.213 [2024-11-15 09:37:06.581994] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.213 [2024-11-15 09:37:06.582112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.213 "name": "Existed_Raid", 00:18:18.213 "uuid": "69c0139c-ddaa-4874-8ef1-cb9a559c4aed", 00:18:18.213 "strip_size_kb": 0, 00:18:18.213 "state": "configuring", 00:18:18.213 "raid_level": "raid1", 00:18:18.213 "superblock": true, 00:18:18.213 "num_base_bdevs": 2, 00:18:18.213 "num_base_bdevs_discovered": 1, 00:18:18.213 "num_base_bdevs_operational": 2, 00:18:18.213 "base_bdevs_list": [ 00:18:18.213 { 00:18:18.213 "name": "BaseBdev1", 00:18:18.213 "uuid": "a1d16e6b-1c1a-45b7-a254-e8ca73d69619", 00:18:18.213 "is_configured": true, 00:18:18.213 "data_offset": 256, 00:18:18.213 "data_size": 7936 00:18:18.213 }, 00:18:18.213 { 00:18:18.213 "name": "BaseBdev2", 00:18:18.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.213 "is_configured": false, 00:18:18.213 "data_offset": 0, 00:18:18.213 "data_size": 0 00:18:18.213 } 00:18:18.213 ] 00:18:18.213 }' 00:18:18.213 09:37:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.213 09:37:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.804 [2024-11-15 09:37:07.062041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.804 [2024-11-15 09:37:07.062369] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:18.804 [2024-11-15 09:37:07.062417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:18.804 [2024-11-15 09:37:07.062520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:18.804 [2024-11-15 09:37:07.062687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:18.804 [2024-11-15 09:37:07.062726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:18.804 [2024-11-15 09:37:07.062882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.804 BaseBdev2 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.804 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.804 [ 00:18:18.804 { 00:18:18.804 "name": "BaseBdev2", 00:18:18.804 "aliases": [ 00:18:18.804 "2fa1adc6-50b9-4ecb-bdba-6ec0063faf80" 00:18:18.804 ], 00:18:18.804 "product_name": "Malloc disk", 00:18:18.804 "block_size": 4096, 00:18:18.804 "num_blocks": 8192, 00:18:18.804 "uuid": "2fa1adc6-50b9-4ecb-bdba-6ec0063faf80", 00:18:18.804 "md_size": 32, 00:18:18.804 "md_interleave": false, 00:18:18.804 "dif_type": 0, 00:18:18.804 "assigned_rate_limits": { 00:18:18.804 "rw_ios_per_sec": 0, 00:18:18.804 "rw_mbytes_per_sec": 0, 00:18:18.804 "r_mbytes_per_sec": 0, 00:18:18.804 "w_mbytes_per_sec": 0 00:18:18.804 }, 00:18:18.805 "claimed": true, 00:18:18.805 "claim_type": 
"exclusive_write", 00:18:18.805 "zoned": false, 00:18:18.805 "supported_io_types": { 00:18:18.805 "read": true, 00:18:18.805 "write": true, 00:18:18.805 "unmap": true, 00:18:18.805 "flush": true, 00:18:18.805 "reset": true, 00:18:18.805 "nvme_admin": false, 00:18:18.805 "nvme_io": false, 00:18:18.805 "nvme_io_md": false, 00:18:18.805 "write_zeroes": true, 00:18:18.805 "zcopy": true, 00:18:18.805 "get_zone_info": false, 00:18:18.805 "zone_management": false, 00:18:18.805 "zone_append": false, 00:18:18.805 "compare": false, 00:18:18.805 "compare_and_write": false, 00:18:18.805 "abort": true, 00:18:18.805 "seek_hole": false, 00:18:18.805 "seek_data": false, 00:18:18.805 "copy": true, 00:18:18.805 "nvme_iov_md": false 00:18:18.805 }, 00:18:18.805 "memory_domains": [ 00:18:18.805 { 00:18:18.805 "dma_device_id": "system", 00:18:18.805 "dma_device_type": 1 00:18:18.805 }, 00:18:18.805 { 00:18:18.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.805 "dma_device_type": 2 00:18:18.805 } 00:18:18.805 ], 00:18:18.805 "driver_specific": {} 00:18:18.805 } 00:18:18.805 ] 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.805 
09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.805 "name": "Existed_Raid", 00:18:18.805 "uuid": "69c0139c-ddaa-4874-8ef1-cb9a559c4aed", 00:18:18.805 "strip_size_kb": 0, 00:18:18.805 "state": "online", 00:18:18.805 "raid_level": "raid1", 00:18:18.805 "superblock": true, 00:18:18.805 "num_base_bdevs": 2, 00:18:18.805 "num_base_bdevs_discovered": 2, 00:18:18.805 "num_base_bdevs_operational": 2, 00:18:18.805 
"base_bdevs_list": [ 00:18:18.805 { 00:18:18.805 "name": "BaseBdev1", 00:18:18.805 "uuid": "a1d16e6b-1c1a-45b7-a254-e8ca73d69619", 00:18:18.805 "is_configured": true, 00:18:18.805 "data_offset": 256, 00:18:18.805 "data_size": 7936 00:18:18.805 }, 00:18:18.805 { 00:18:18.805 "name": "BaseBdev2", 00:18:18.805 "uuid": "2fa1adc6-50b9-4ecb-bdba-6ec0063faf80", 00:18:18.805 "is_configured": true, 00:18:18.805 "data_offset": 256, 00:18:18.805 "data_size": 7936 00:18:18.805 } 00:18:18.805 ] 00:18:18.805 }' 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.805 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.064 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:18:19.064 [2024-11-15 09:37:07.521715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.338 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.338 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.338 "name": "Existed_Raid", 00:18:19.338 "aliases": [ 00:18:19.338 "69c0139c-ddaa-4874-8ef1-cb9a559c4aed" 00:18:19.338 ], 00:18:19.338 "product_name": "Raid Volume", 00:18:19.338 "block_size": 4096, 00:18:19.338 "num_blocks": 7936, 00:18:19.338 "uuid": "69c0139c-ddaa-4874-8ef1-cb9a559c4aed", 00:18:19.338 "md_size": 32, 00:18:19.338 "md_interleave": false, 00:18:19.338 "dif_type": 0, 00:18:19.338 "assigned_rate_limits": { 00:18:19.338 "rw_ios_per_sec": 0, 00:18:19.338 "rw_mbytes_per_sec": 0, 00:18:19.338 "r_mbytes_per_sec": 0, 00:18:19.338 "w_mbytes_per_sec": 0 00:18:19.338 }, 00:18:19.338 "claimed": false, 00:18:19.338 "zoned": false, 00:18:19.338 "supported_io_types": { 00:18:19.338 "read": true, 00:18:19.338 "write": true, 00:18:19.338 "unmap": false, 00:18:19.338 "flush": false, 00:18:19.338 "reset": true, 00:18:19.338 "nvme_admin": false, 00:18:19.338 "nvme_io": false, 00:18:19.338 "nvme_io_md": false, 00:18:19.338 "write_zeroes": true, 00:18:19.338 "zcopy": false, 00:18:19.338 "get_zone_info": false, 00:18:19.338 "zone_management": false, 00:18:19.338 "zone_append": false, 00:18:19.338 "compare": false, 00:18:19.338 "compare_and_write": false, 00:18:19.338 "abort": false, 00:18:19.338 "seek_hole": false, 00:18:19.338 "seek_data": false, 00:18:19.338 "copy": false, 00:18:19.338 "nvme_iov_md": false 00:18:19.338 }, 00:18:19.338 "memory_domains": [ 00:18:19.338 { 00:18:19.338 "dma_device_id": "system", 00:18:19.338 "dma_device_type": 1 00:18:19.338 }, 00:18:19.338 { 00:18:19.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.338 "dma_device_type": 2 00:18:19.338 }, 00:18:19.338 { 
00:18:19.338 "dma_device_id": "system", 00:18:19.338 "dma_device_type": 1 00:18:19.338 }, 00:18:19.338 { 00:18:19.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.338 "dma_device_type": 2 00:18:19.338 } 00:18:19.338 ], 00:18:19.338 "driver_specific": { 00:18:19.338 "raid": { 00:18:19.338 "uuid": "69c0139c-ddaa-4874-8ef1-cb9a559c4aed", 00:18:19.338 "strip_size_kb": 0, 00:18:19.338 "state": "online", 00:18:19.338 "raid_level": "raid1", 00:18:19.338 "superblock": true, 00:18:19.338 "num_base_bdevs": 2, 00:18:19.338 "num_base_bdevs_discovered": 2, 00:18:19.338 "num_base_bdevs_operational": 2, 00:18:19.338 "base_bdevs_list": [ 00:18:19.338 { 00:18:19.338 "name": "BaseBdev1", 00:18:19.338 "uuid": "a1d16e6b-1c1a-45b7-a254-e8ca73d69619", 00:18:19.338 "is_configured": true, 00:18:19.338 "data_offset": 256, 00:18:19.338 "data_size": 7936 00:18:19.338 }, 00:18:19.338 { 00:18:19.338 "name": "BaseBdev2", 00:18:19.338 "uuid": "2fa1adc6-50b9-4ecb-bdba-6ec0063faf80", 00:18:19.338 "is_configured": true, 00:18:19.338 "data_offset": 256, 00:18:19.338 "data_size": 7936 00:18:19.338 } 00:18:19.338 ] 00:18:19.338 } 00:18:19.338 } 00:18:19.338 }' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:19.339 BaseBdev2' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.339 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.339 [2024-11-15 09:37:07.737060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.598 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.598 "name": "Existed_Raid", 00:18:19.598 "uuid": "69c0139c-ddaa-4874-8ef1-cb9a559c4aed", 00:18:19.598 "strip_size_kb": 0, 00:18:19.598 "state": "online", 00:18:19.598 "raid_level": "raid1", 00:18:19.598 "superblock": true, 00:18:19.598 "num_base_bdevs": 2, 00:18:19.598 "num_base_bdevs_discovered": 1, 00:18:19.598 "num_base_bdevs_operational": 1, 00:18:19.598 "base_bdevs_list": [ 00:18:19.598 { 00:18:19.598 "name": null, 00:18:19.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.599 "is_configured": false, 00:18:19.599 "data_offset": 0, 00:18:19.599 "data_size": 7936 00:18:19.599 }, 00:18:19.599 { 00:18:19.599 "name": "BaseBdev2", 00:18:19.599 "uuid": 
"2fa1adc6-50b9-4ecb-bdba-6ec0063faf80", 00:18:19.599 "is_configured": true, 00:18:19.599 "data_offset": 256, 00:18:19.599 "data_size": 7936 00:18:19.599 } 00:18:19.599 ] 00:18:19.599 }' 00:18:19.599 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.599 09:37:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.858 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:19.858 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:19.858 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.858 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.858 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.858 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:19.858 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.117 [2024-11-15 09:37:08.332809] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:20.117 [2024-11-15 09:37:08.332944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.117 [2024-11-15 09:37:08.435767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.117 [2024-11-15 09:37:08.435914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.117 [2024-11-15 09:37:08.435934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:20.117 09:37:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87639 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87639 ']' 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87639 00:18:20.117 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:20.118 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.118 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87639 00:18:20.118 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:20.118 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:20.118 killing process with pid 87639 00:18:20.118 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87639' 00:18:20.118 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87639 00:18:20.118 [2024-11-15 09:37:08.532549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.118 09:37:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87639 00:18:20.118 [2024-11-15 09:37:08.548990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:21.498 09:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:21.498 00:18:21.498 real 0m4.993s 00:18:21.498 user 0m7.149s 00:18:21.498 sys 0m0.869s 00:18:21.498 09:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:21.498 
************************************ 00:18:21.498 END TEST raid_state_function_test_sb_md_separate 00:18:21.498 ************************************ 00:18:21.498 09:37:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.498 09:37:09 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:21.498 09:37:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:21.498 09:37:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:21.498 09:37:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.498 ************************************ 00:18:21.498 START TEST raid_superblock_test_md_separate 00:18:21.498 ************************************ 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87891 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87891 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87891 ']' 00:18:21.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:21.498 09:37:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:21.498 [2024-11-15 09:37:09.795721] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:18:21.498 [2024-11-15 09:37:09.795905] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87891 ] 00:18:21.760 [2024-11-15 09:37:09.968256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.760 [2024-11-15 09:37:10.081378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.020 [2024-11-15 09:37:10.282361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.020 [2024-11-15 09:37:10.282410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:22.279 09:37:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.279 malloc1 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.279 [2024-11-15 09:37:10.674908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:22.279 [2024-11-15 09:37:10.675015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.279 [2024-11-15 09:37:10.675056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:22.279 [2024-11-15 09:37:10.675092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.279 [2024-11-15 09:37:10.677112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.279 [2024-11-15 09:37:10.677184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:22.279 pt1 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.279 malloc2 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.279 09:37:10 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.279 [2024-11-15 09:37:10.734541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:22.279 [2024-11-15 09:37:10.734602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.279 [2024-11-15 09:37:10.734622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:22.279 [2024-11-15 09:37:10.734631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.279 [2024-11-15 09:37:10.736509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.279 [2024-11-15 09:37:10.736546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:22.279 pt2 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.279 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.539 [2024-11-15 09:37:10.746550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:22.539 [2024-11-15 09:37:10.748447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:22.539 [2024-11-15 09:37:10.748644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:22.539 [2024-11-15 09:37:10.748659] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:22.539 [2024-11-15 09:37:10.748747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:22.539 [2024-11-15 09:37:10.748901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:22.539 [2024-11-15 09:37:10.748914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:22.539 [2024-11-15 09:37:10.749043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.539 09:37:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.539 "name": "raid_bdev1", 00:18:22.539 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:22.539 "strip_size_kb": 0, 00:18:22.539 "state": "online", 00:18:22.539 "raid_level": "raid1", 00:18:22.539 "superblock": true, 00:18:22.539 "num_base_bdevs": 2, 00:18:22.539 "num_base_bdevs_discovered": 2, 00:18:22.539 "num_base_bdevs_operational": 2, 00:18:22.539 "base_bdevs_list": [ 00:18:22.539 { 00:18:22.539 "name": "pt1", 00:18:22.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.539 "is_configured": true, 00:18:22.539 "data_offset": 256, 00:18:22.539 "data_size": 7936 00:18:22.539 }, 00:18:22.539 { 00:18:22.539 "name": "pt2", 00:18:22.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.539 "is_configured": true, 00:18:22.539 "data_offset": 256, 00:18:22.539 "data_size": 7936 00:18:22.539 } 00:18:22.539 ] 00:18:22.539 }' 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.539 09:37:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.799 [2024-11-15 09:37:11.218043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.799 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.799 "name": "raid_bdev1", 00:18:22.799 "aliases": [ 00:18:22.799 "741d874f-f2f3-4794-a0cf-9a1ac0916140" 00:18:22.799 ], 00:18:22.799 "product_name": "Raid Volume", 00:18:22.799 "block_size": 4096, 00:18:22.799 "num_blocks": 7936, 00:18:22.799 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:22.799 "md_size": 32, 00:18:22.799 "md_interleave": false, 00:18:22.799 "dif_type": 0, 00:18:22.799 "assigned_rate_limits": { 00:18:22.799 "rw_ios_per_sec": 0, 00:18:22.799 "rw_mbytes_per_sec": 0, 00:18:22.799 "r_mbytes_per_sec": 0, 00:18:22.799 "w_mbytes_per_sec": 0 00:18:22.799 }, 00:18:22.799 "claimed": false, 00:18:22.799 "zoned": false, 
00:18:22.799 "supported_io_types": { 00:18:22.799 "read": true, 00:18:22.799 "write": true, 00:18:22.799 "unmap": false, 00:18:22.799 "flush": false, 00:18:22.799 "reset": true, 00:18:22.799 "nvme_admin": false, 00:18:22.799 "nvme_io": false, 00:18:22.799 "nvme_io_md": false, 00:18:22.799 "write_zeroes": true, 00:18:22.799 "zcopy": false, 00:18:22.799 "get_zone_info": false, 00:18:22.799 "zone_management": false, 00:18:22.799 "zone_append": false, 00:18:22.799 "compare": false, 00:18:22.799 "compare_and_write": false, 00:18:22.799 "abort": false, 00:18:22.799 "seek_hole": false, 00:18:22.799 "seek_data": false, 00:18:22.799 "copy": false, 00:18:22.799 "nvme_iov_md": false 00:18:22.799 }, 00:18:22.799 "memory_domains": [ 00:18:22.799 { 00:18:22.799 "dma_device_id": "system", 00:18:22.799 "dma_device_type": 1 00:18:22.799 }, 00:18:22.799 { 00:18:22.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.799 "dma_device_type": 2 00:18:22.799 }, 00:18:22.799 { 00:18:22.799 "dma_device_id": "system", 00:18:22.799 "dma_device_type": 1 00:18:22.799 }, 00:18:22.799 { 00:18:22.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.799 "dma_device_type": 2 00:18:22.799 } 00:18:22.799 ], 00:18:22.799 "driver_specific": { 00:18:22.799 "raid": { 00:18:22.799 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:22.799 "strip_size_kb": 0, 00:18:22.799 "state": "online", 00:18:22.799 "raid_level": "raid1", 00:18:22.799 "superblock": true, 00:18:22.799 "num_base_bdevs": 2, 00:18:22.799 "num_base_bdevs_discovered": 2, 00:18:22.799 "num_base_bdevs_operational": 2, 00:18:22.799 "base_bdevs_list": [ 00:18:22.799 { 00:18:22.799 "name": "pt1", 00:18:22.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:22.799 "is_configured": true, 00:18:22.799 "data_offset": 256, 00:18:22.799 "data_size": 7936 00:18:22.799 }, 00:18:22.799 { 00:18:22.799 "name": "pt2", 00:18:22.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:22.799 "is_configured": true, 00:18:22.799 "data_offset": 256, 
00:18:22.799 "data_size": 7936 00:18:22.799 } 00:18:22.799 ] 00:18:22.799 } 00:18:22.799 } 00:18:22.799 }' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:23.060 pt2' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:23.060 [2024-11-15 09:37:11.449594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=741d874f-f2f3-4794-a0cf-9a1ac0916140 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 741d874f-f2f3-4794-a0cf-9a1ac0916140 ']' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:23.060 09:37:11 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 [2024-11-15 09:37:11.497236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.060 [2024-11-15 09:37:11.497311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.060 [2024-11-15 09:37:11.497432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.060 [2024-11-15 09:37:11.497520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.060 [2024-11-15 09:37:11.497586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.060 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.321 [2024-11-15 09:37:11.632999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:23.321 [2024-11-15 09:37:11.634905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:23.321 [2024-11-15 09:37:11.635034] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:23.321 [2024-11-15 09:37:11.635134] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:23.321 [2024-11-15 09:37:11.635187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:23.321 [2024-11-15 09:37:11.635244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:18:23.321 request: 00:18:23.321 { 00:18:23.321 "name": "raid_bdev1", 00:18:23.321 "raid_level": "raid1", 00:18:23.321 "base_bdevs": [ 00:18:23.321 "malloc1", 00:18:23.321 "malloc2" 00:18:23.321 ], 00:18:23.321 "superblock": false, 00:18:23.321 "method": "bdev_raid_create", 00:18:23.321 "req_id": 1 00:18:23.321 } 00:18:23.321 Got JSON-RPC error response 00:18:23.321 response: 00:18:23.321 { 00:18:23.321 "code": -17, 00:18:23.321 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:23.321 } 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.321 [2024-11-15 09:37:11.696900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:23.321 [2024-11-15 09:37:11.696954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.321 [2024-11-15 09:37:11.696970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:23.321 [2024-11-15 09:37:11.696981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.321 [2024-11-15 09:37:11.698938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.321 [2024-11-15 09:37:11.699025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:23.321 [2024-11-15 09:37:11.699082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:23.321 [2024-11-15 09:37:11.699144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:23.321 pt1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.321 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.321 "name": "raid_bdev1", 00:18:23.321 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:23.321 "strip_size_kb": 0, 00:18:23.321 "state": "configuring", 00:18:23.321 "raid_level": "raid1", 00:18:23.321 "superblock": true, 00:18:23.321 "num_base_bdevs": 2, 00:18:23.321 "num_base_bdevs_discovered": 1, 00:18:23.321 "num_base_bdevs_operational": 2, 00:18:23.321 "base_bdevs_list": [ 00:18:23.321 { 00:18:23.321 "name": "pt1", 00:18:23.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.321 "is_configured": true, 00:18:23.321 "data_offset": 256, 00:18:23.321 "data_size": 7936 00:18:23.321 }, 00:18:23.321 { 
00:18:23.321 "name": null, 00:18:23.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.321 "is_configured": false, 00:18:23.321 "data_offset": 256, 00:18:23.322 "data_size": 7936 00:18:23.322 } 00:18:23.322 ] 00:18:23.322 }' 00:18:23.322 09:37:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.322 09:37:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.890 [2024-11-15 09:37:12.164251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:23.890 [2024-11-15 09:37:12.164404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.890 [2024-11-15 09:37:12.164446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:23.890 [2024-11-15 09:37:12.164500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.890 [2024-11-15 09:37:12.164766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.890 [2024-11-15 09:37:12.164819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:23.890 [2024-11-15 09:37:12.164908] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:23.890 [2024-11-15 09:37:12.164960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.890 [2024-11-15 09:37:12.165113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:23.890 [2024-11-15 09:37:12.165152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:23.890 [2024-11-15 09:37:12.165241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:23.890 [2024-11-15 09:37:12.165397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:23.890 [2024-11-15 09:37:12.165431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:23.890 [2024-11-15 09:37:12.165567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.890 pt2 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.890 09:37:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.890 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.891 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.891 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.891 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:23.891 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.891 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.891 "name": "raid_bdev1", 00:18:23.891 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:23.891 "strip_size_kb": 0, 00:18:23.891 "state": "online", 00:18:23.891 "raid_level": "raid1", 00:18:23.891 "superblock": true, 00:18:23.891 "num_base_bdevs": 2, 00:18:23.891 "num_base_bdevs_discovered": 2, 00:18:23.891 "num_base_bdevs_operational": 2, 00:18:23.891 "base_bdevs_list": [ 00:18:23.891 { 00:18:23.891 "name": "pt1", 00:18:23.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.891 "is_configured": true, 00:18:23.891 "data_offset": 256, 00:18:23.891 "data_size": 7936 00:18:23.891 }, 00:18:23.891 { 00:18:23.891 "name": "pt2", 00:18:23.891 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:23.891 "is_configured": true, 00:18:23.891 "data_offset": 256, 00:18:23.891 "data_size": 7936 00:18:23.891 } 00:18:23.891 ] 00:18:23.891 }' 00:18:23.891 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.891 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.461 [2024-11-15 09:37:12.659689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:24.461 "name": "raid_bdev1", 00:18:24.461 
"aliases": [ 00:18:24.461 "741d874f-f2f3-4794-a0cf-9a1ac0916140" 00:18:24.461 ], 00:18:24.461 "product_name": "Raid Volume", 00:18:24.461 "block_size": 4096, 00:18:24.461 "num_blocks": 7936, 00:18:24.461 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:24.461 "md_size": 32, 00:18:24.461 "md_interleave": false, 00:18:24.461 "dif_type": 0, 00:18:24.461 "assigned_rate_limits": { 00:18:24.461 "rw_ios_per_sec": 0, 00:18:24.461 "rw_mbytes_per_sec": 0, 00:18:24.461 "r_mbytes_per_sec": 0, 00:18:24.461 "w_mbytes_per_sec": 0 00:18:24.461 }, 00:18:24.461 "claimed": false, 00:18:24.461 "zoned": false, 00:18:24.461 "supported_io_types": { 00:18:24.461 "read": true, 00:18:24.461 "write": true, 00:18:24.461 "unmap": false, 00:18:24.461 "flush": false, 00:18:24.461 "reset": true, 00:18:24.461 "nvme_admin": false, 00:18:24.461 "nvme_io": false, 00:18:24.461 "nvme_io_md": false, 00:18:24.461 "write_zeroes": true, 00:18:24.461 "zcopy": false, 00:18:24.461 "get_zone_info": false, 00:18:24.461 "zone_management": false, 00:18:24.461 "zone_append": false, 00:18:24.461 "compare": false, 00:18:24.461 "compare_and_write": false, 00:18:24.461 "abort": false, 00:18:24.461 "seek_hole": false, 00:18:24.461 "seek_data": false, 00:18:24.461 "copy": false, 00:18:24.461 "nvme_iov_md": false 00:18:24.461 }, 00:18:24.461 "memory_domains": [ 00:18:24.461 { 00:18:24.461 "dma_device_id": "system", 00:18:24.461 "dma_device_type": 1 00:18:24.461 }, 00:18:24.461 { 00:18:24.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.461 "dma_device_type": 2 00:18:24.461 }, 00:18:24.461 { 00:18:24.461 "dma_device_id": "system", 00:18:24.461 "dma_device_type": 1 00:18:24.461 }, 00:18:24.461 { 00:18:24.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.461 "dma_device_type": 2 00:18:24.461 } 00:18:24.461 ], 00:18:24.461 "driver_specific": { 00:18:24.461 "raid": { 00:18:24.461 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:24.461 "strip_size_kb": 0, 00:18:24.461 "state": "online", 00:18:24.461 
"raid_level": "raid1", 00:18:24.461 "superblock": true, 00:18:24.461 "num_base_bdevs": 2, 00:18:24.461 "num_base_bdevs_discovered": 2, 00:18:24.461 "num_base_bdevs_operational": 2, 00:18:24.461 "base_bdevs_list": [ 00:18:24.461 { 00:18:24.461 "name": "pt1", 00:18:24.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.461 "is_configured": true, 00:18:24.461 "data_offset": 256, 00:18:24.461 "data_size": 7936 00:18:24.461 }, 00:18:24.461 { 00:18:24.461 "name": "pt2", 00:18:24.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.461 "is_configured": true, 00:18:24.461 "data_offset": 256, 00:18:24.461 "data_size": 7936 00:18:24.461 } 00:18:24.461 ] 00:18:24.461 } 00:18:24.461 } 00:18:24.461 }' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:24.461 pt2' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.461 09:37:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.461 [2024-11-15 09:37:12.867326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 741d874f-f2f3-4794-a0cf-9a1ac0916140 '!=' 741d874f-f2f3-4794-a0cf-9a1ac0916140 ']' 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.461 [2024-11-15 09:37:12.911046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:24.461 
09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.461 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.462 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.462 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.720 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.720 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.720 "name": "raid_bdev1", 00:18:24.720 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:24.720 "strip_size_kb": 0, 00:18:24.720 "state": "online", 00:18:24.720 "raid_level": "raid1", 00:18:24.720 "superblock": true, 00:18:24.720 "num_base_bdevs": 2, 00:18:24.720 "num_base_bdevs_discovered": 1, 00:18:24.720 "num_base_bdevs_operational": 1, 00:18:24.720 "base_bdevs_list": [ 00:18:24.720 { 00:18:24.720 "name": null, 00:18:24.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.720 "is_configured": false, 00:18:24.720 "data_offset": 0, 00:18:24.720 "data_size": 7936 00:18:24.720 }, 00:18:24.720 { 00:18:24.720 "name": "pt2", 00:18:24.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.720 "is_configured": true, 00:18:24.720 "data_offset": 256, 00:18:24.720 "data_size": 7936 00:18:24.720 } 
00:18:24.720 ] 00:18:24.720 }' 00:18:24.720 09:37:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.720 09:37:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.980 [2024-11-15 09:37:13.374210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.980 [2024-11-15 09:37:13.374243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.980 [2024-11-15 09:37:13.374325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.980 [2024-11-15 09:37:13.374371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.980 [2024-11-15 09:37:13.374382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.980 09:37:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.980 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.239 [2024-11-15 09:37:13.450160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.239 [2024-11-15 
09:37:13.450227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.239 [2024-11-15 09:37:13.450247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:25.239 [2024-11-15 09:37:13.450258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.239 [2024-11-15 09:37:13.452291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.239 [2024-11-15 09:37:13.452335] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.239 [2024-11-15 09:37:13.452391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:25.239 [2024-11-15 09:37:13.452447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.239 [2024-11-15 09:37:13.452547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:25.239 [2024-11-15 09:37:13.452559] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:25.239 [2024-11-15 09:37:13.452635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:25.239 [2024-11-15 09:37:13.452750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:25.239 [2024-11-15 09:37:13.452758] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:25.239 [2024-11-15 09:37:13.452878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.239 pt2 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.239 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.239 "name": "raid_bdev1", 00:18:25.240 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:25.240 "strip_size_kb": 0, 00:18:25.240 "state": "online", 00:18:25.240 "raid_level": "raid1", 00:18:25.240 "superblock": true, 00:18:25.240 "num_base_bdevs": 2, 00:18:25.240 
"num_base_bdevs_discovered": 1, 00:18:25.240 "num_base_bdevs_operational": 1, 00:18:25.240 "base_bdevs_list": [ 00:18:25.240 { 00:18:25.240 "name": null, 00:18:25.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.240 "is_configured": false, 00:18:25.240 "data_offset": 256, 00:18:25.240 "data_size": 7936 00:18:25.240 }, 00:18:25.240 { 00:18:25.240 "name": "pt2", 00:18:25.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.240 "is_configured": true, 00:18:25.240 "data_offset": 256, 00:18:25.240 "data_size": 7936 00:18:25.240 } 00:18:25.240 ] 00:18:25.240 }' 00:18:25.240 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.240 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.499 [2024-11-15 09:37:13.885391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.499 [2024-11-15 09:37:13.885484] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.499 [2024-11-15 09:37:13.885589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.499 [2024-11-15 09:37:13.885657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.499 [2024-11-15 09:37:13.885669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.499 09:37:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.499 [2024-11-15 09:37:13.949289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:25.499 [2024-11-15 09:37:13.949404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.499 [2024-11-15 09:37:13.949440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:25.499 [2024-11-15 09:37:13.949468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.499 [2024-11-15 09:37:13.951455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.499 [2024-11-15 09:37:13.951528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:18:25.499 [2024-11-15 09:37:13.951605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:25.499 [2024-11-15 09:37:13.951667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.499 [2024-11-15 09:37:13.951878] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:25.499 [2024-11-15 09:37:13.951931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.499 [2024-11-15 09:37:13.951971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:25.499 [2024-11-15 09:37:13.952152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.499 [2024-11-15 09:37:13.952266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:25.499 [2024-11-15 09:37:13.952304] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:25.499 [2024-11-15 09:37:13.952398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:25.499 [2024-11-15 09:37:13.952533] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:25.499 [2024-11-15 09:37:13.952571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:25.499 [2024-11-15 09:37:13.952717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.499 pt1 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.499 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.500 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:25.759 09:37:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.759 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.759 "name": "raid_bdev1", 00:18:25.759 "uuid": "741d874f-f2f3-4794-a0cf-9a1ac0916140", 00:18:25.759 "strip_size_kb": 0, 00:18:25.759 "state": "online", 00:18:25.759 "raid_level": "raid1", 
00:18:25.759 "superblock": true, 00:18:25.759 "num_base_bdevs": 2, 00:18:25.759 "num_base_bdevs_discovered": 1, 00:18:25.759 "num_base_bdevs_operational": 1, 00:18:25.759 "base_bdevs_list": [ 00:18:25.759 { 00:18:25.759 "name": null, 00:18:25.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.759 "is_configured": false, 00:18:25.759 "data_offset": 256, 00:18:25.759 "data_size": 7936 00:18:25.759 }, 00:18:25.759 { 00:18:25.759 "name": "pt2", 00:18:25.759 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.759 "is_configured": true, 00:18:25.759 "data_offset": 256, 00:18:25.759 "data_size": 7936 00:18:25.759 } 00:18:25.759 ] 00:18:25.759 }' 00:18:25.759 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.759 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:26.042 09:37:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:26.042 [2024-11-15 09:37:14.460798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.042 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 741d874f-f2f3-4794-a0cf-9a1ac0916140 '!=' 741d874f-f2f3-4794-a0cf-9a1ac0916140 ']' 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87891 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87891 ']' 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87891 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87891 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:26.302 killing process with pid 87891 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87891' 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87891 00:18:26.302 [2024-11-15 09:37:14.547718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:26.302 [2024-11-15 09:37:14.547817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:26.302 [2024-11-15 09:37:14.547880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.302 [2024-11-15 09:37:14.547898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:26.302 09:37:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87891 00:18:26.302 [2024-11-15 09:37:14.757295] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.685 ************************************ 00:18:27.685 END TEST raid_superblock_test_md_separate 00:18:27.685 ************************************ 00:18:27.685 09:37:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:27.685 00:18:27.685 real 0m6.124s 00:18:27.685 user 0m9.313s 00:18:27.685 sys 0m1.110s 00:18:27.685 09:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:27.685 09:37:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.685 09:37:15 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:27.685 09:37:15 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:27.685 09:37:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:27.685 09:37:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:27.685 09:37:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.685 ************************************ 00:18:27.685 START TEST raid_rebuild_test_sb_md_separate 00:18:27.685 ************************************ 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88214 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88214 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88214 ']' 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:27.685 09:37:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:27.685 [2024-11-15 09:37:15.991632] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:18:27.685 [2024-11-15 09:37:15.991832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88214 ] 00:18:27.685 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:27.685 Zero copy mechanism will not be used. 00:18:27.945 [2024-11-15 09:37:16.166302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.945 [2024-11-15 09:37:16.278490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.204 [2024-11-15 09:37:16.474620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.204 [2024-11-15 09:37:16.474723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 BaseBdev1_malloc 
00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 [2024-11-15 09:37:16.861004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:28.465 [2024-11-15 09:37:16.861099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.465 [2024-11-15 09:37:16.861125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:28.465 [2024-11-15 09:37:16.861136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.465 [2024-11-15 09:37:16.862979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.465 [2024-11-15 09:37:16.863017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.465 BaseBdev1 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 BaseBdev2_malloc 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.465 [2024-11-15 09:37:16.914918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:28.465 [2024-11-15 09:37:16.914976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.465 [2024-11-15 09:37:16.914994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:28.465 [2024-11-15 09:37:16.915004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.465 [2024-11-15 09:37:16.916759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.465 [2024-11-15 09:37:16.916864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:28.465 BaseBdev2 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.465 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.726 spare_malloc 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.726 spare_delay 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.726 [2024-11-15 09:37:16.994417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.726 [2024-11-15 09:37:16.994473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.726 [2024-11-15 09:37:16.994493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:28.726 [2024-11-15 09:37:16.994503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.726 [2024-11-15 09:37:16.996389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.726 [2024-11-15 09:37:16.996436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.726 spare 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:28.726 09:37:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.726 [2024-11-15 09:37:17.006445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.726 [2024-11-15 09:37:17.008160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.726 [2024-11-15 09:37:17.008386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:28.726 [2024-11-15 09:37:17.008435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:28.726 [2024-11-15 09:37:17.008525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:28.726 [2024-11-15 09:37:17.008672] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.726 [2024-11-15 09:37:17.008710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:28.726 [2024-11-15 09:37:17.008880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.726 09:37:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.726 "name": "raid_bdev1", 00:18:28.726 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:28.726 "strip_size_kb": 0, 00:18:28.726 "state": "online", 00:18:28.726 "raid_level": "raid1", 00:18:28.726 "superblock": true, 00:18:28.726 "num_base_bdevs": 2, 00:18:28.726 "num_base_bdevs_discovered": 2, 00:18:28.726 "num_base_bdevs_operational": 2, 00:18:28.726 "base_bdevs_list": [ 00:18:28.726 { 00:18:28.726 "name": "BaseBdev1", 00:18:28.726 "uuid": "be7f582d-ad6f-55bb-aa0d-150ba2676bdb", 00:18:28.726 "is_configured": true, 00:18:28.726 "data_offset": 256, 00:18:28.726 "data_size": 7936 00:18:28.726 }, 00:18:28.726 { 00:18:28.726 "name": "BaseBdev2", 00:18:28.726 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:28.726 "is_configured": true, 00:18:28.726 "data_offset": 256, 00:18:28.726 "data_size": 7936 
00:18:28.726 } 00:18:28.726 ] 00:18:28.726 }' 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.726 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 [2024-11-15 09:37:17.485930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.295 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:29.555 [2024-11-15 09:37:17.769217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:29.555 /dev/nbd0 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.555 1+0 records in 00:18:29.555 1+0 records out 00:18:29.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311166 s, 13.2 MB/s 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:29.555 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.556 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:29.556 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:29.556 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.556 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.556 09:37:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:29.556 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:29.556 09:37:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:30.126 7936+0 records in 00:18:30.126 7936+0 records out 00:18:30.126 32505856 bytes (33 MB, 31 MiB) copied, 0.607626 s, 53.5 MB/s 00:18:30.126 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:30.126 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.126 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:30.126 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.126 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:30.126 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.126 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:30.386 [2024-11-15 09:37:18.658142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.386 09:37:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.386 [2024-11-15 09:37:18.678911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.386 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.386 "name": "raid_bdev1", 00:18:30.386 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:30.386 "strip_size_kb": 0, 00:18:30.386 "state": "online", 00:18:30.386 "raid_level": "raid1", 00:18:30.386 "superblock": true, 00:18:30.386 "num_base_bdevs": 2, 00:18:30.386 "num_base_bdevs_discovered": 1, 00:18:30.386 "num_base_bdevs_operational": 1, 00:18:30.386 "base_bdevs_list": [ 00:18:30.386 { 00:18:30.386 "name": null, 00:18:30.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.386 "is_configured": false, 00:18:30.386 "data_offset": 0, 00:18:30.386 "data_size": 7936 00:18:30.386 }, 00:18:30.386 { 00:18:30.387 "name": "BaseBdev2", 00:18:30.387 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:30.387 "is_configured": true, 00:18:30.387 "data_offset": 256, 00:18:30.387 "data_size": 7936 00:18:30.387 } 00:18:30.387 ] 00:18:30.387 }' 00:18:30.387 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.387 09:37:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.957 09:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:30.957 09:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.957 09:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:30.957 [2024-11-15 09:37:19.162099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.957 [2024-11-15 09:37:19.177778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:30.957 09:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.957 09:37:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:30.957 [2024-11-15 09:37:19.180100] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.896 "name": "raid_bdev1", 00:18:31.896 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:31.896 "strip_size_kb": 0, 00:18:31.896 "state": "online", 00:18:31.896 "raid_level": "raid1", 00:18:31.896 "superblock": true, 00:18:31.896 "num_base_bdevs": 2, 00:18:31.896 "num_base_bdevs_discovered": 2, 00:18:31.896 "num_base_bdevs_operational": 2, 00:18:31.896 "process": { 00:18:31.896 "type": "rebuild", 00:18:31.896 "target": "spare", 00:18:31.896 "progress": { 00:18:31.896 "blocks": 2560, 00:18:31.896 "percent": 32 00:18:31.896 } 00:18:31.896 }, 00:18:31.896 "base_bdevs_list": [ 00:18:31.896 { 00:18:31.896 "name": "spare", 00:18:31.896 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:31.896 "is_configured": true, 00:18:31.896 "data_offset": 256, 00:18:31.896 "data_size": 7936 00:18:31.896 }, 00:18:31.896 { 00:18:31.896 "name": "BaseBdev2", 00:18:31.896 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:31.896 "is_configured": true, 00:18:31.896 "data_offset": 256, 00:18:31.896 "data_size": 7936 00:18:31.896 } 00:18:31.896 ] 00:18:31.896 }' 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.896 09:37:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.896 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:31.896 [2024-11-15 09:37:20.344049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.156 [2024-11-15 09:37:20.391095] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:32.156 [2024-11-15 09:37:20.391214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.156 [2024-11-15 09:37:20.391231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.156 [2024-11-15 09:37:20.391243] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.156 09:37:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.156 "name": "raid_bdev1", 00:18:32.156 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:32.156 "strip_size_kb": 0, 00:18:32.156 "state": "online", 00:18:32.156 "raid_level": "raid1", 00:18:32.156 "superblock": true, 00:18:32.156 "num_base_bdevs": 2, 00:18:32.156 "num_base_bdevs_discovered": 1, 00:18:32.156 "num_base_bdevs_operational": 1, 00:18:32.156 "base_bdevs_list": [ 00:18:32.156 { 00:18:32.156 "name": null, 00:18:32.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.156 "is_configured": false, 00:18:32.156 "data_offset": 0, 00:18:32.156 "data_size": 7936 00:18:32.156 }, 00:18:32.156 { 00:18:32.156 "name": "BaseBdev2", 00:18:32.156 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:32.156 "is_configured": true, 00:18:32.156 "data_offset": 256, 00:18:32.156 "data_size": 7936 00:18:32.156 } 00:18:32.156 ] 00:18:32.156 }' 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.156 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.416 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.677 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.677 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.677 "name": "raid_bdev1", 00:18:32.677 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:32.677 "strip_size_kb": 0, 00:18:32.677 "state": "online", 00:18:32.677 "raid_level": "raid1", 00:18:32.677 "superblock": true, 00:18:32.677 "num_base_bdevs": 2, 00:18:32.677 "num_base_bdevs_discovered": 1, 00:18:32.677 "num_base_bdevs_operational": 1, 00:18:32.677 "base_bdevs_list": [ 00:18:32.677 { 00:18:32.677 "name": null, 00:18:32.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.677 
"is_configured": false, 00:18:32.677 "data_offset": 0, 00:18:32.677 "data_size": 7936 00:18:32.677 }, 00:18:32.677 { 00:18:32.677 "name": "BaseBdev2", 00:18:32.677 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:32.677 "is_configured": true, 00:18:32.677 "data_offset": 256, 00:18:32.677 "data_size": 7936 00:18:32.677 } 00:18:32.677 ] 00:18:32.677 }' 00:18:32.677 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.677 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.677 09:37:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.677 09:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.677 09:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.677 09:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.677 09:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:32.677 [2024-11-15 09:37:21.021060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.677 [2024-11-15 09:37:21.035540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:32.677 09:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.677 09:37:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:32.677 [2024-11-15 09:37:21.037912] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.617 09:37:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.617 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.876 "name": "raid_bdev1", 00:18:33.876 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:33.876 "strip_size_kb": 0, 00:18:33.876 "state": "online", 00:18:33.876 "raid_level": "raid1", 00:18:33.876 "superblock": true, 00:18:33.876 "num_base_bdevs": 2, 00:18:33.876 "num_base_bdevs_discovered": 2, 00:18:33.876 "num_base_bdevs_operational": 2, 00:18:33.876 "process": { 00:18:33.876 "type": "rebuild", 00:18:33.876 "target": "spare", 00:18:33.876 "progress": { 00:18:33.876 "blocks": 2560, 00:18:33.876 "percent": 32 00:18:33.876 } 00:18:33.876 }, 00:18:33.876 "base_bdevs_list": [ 00:18:33.876 { 00:18:33.876 "name": "spare", 00:18:33.876 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:33.876 "is_configured": true, 00:18:33.876 "data_offset": 256, 00:18:33.876 "data_size": 7936 00:18:33.876 }, 
00:18:33.876 { 00:18:33.876 "name": "BaseBdev2", 00:18:33.876 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:33.876 "is_configured": true, 00:18:33.876 "data_offset": 256, 00:18:33.876 "data_size": 7936 00:18:33.876 } 00:18:33.876 ] 00:18:33.876 }' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:33.876 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=736 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.876 09:37:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.876 "name": "raid_bdev1", 00:18:33.876 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:33.876 "strip_size_kb": 0, 00:18:33.876 "state": "online", 00:18:33.876 "raid_level": "raid1", 00:18:33.876 "superblock": true, 00:18:33.876 "num_base_bdevs": 2, 00:18:33.876 "num_base_bdevs_discovered": 2, 00:18:33.876 "num_base_bdevs_operational": 2, 00:18:33.876 "process": { 00:18:33.876 "type": "rebuild", 00:18:33.876 "target": "spare", 00:18:33.876 "progress": { 00:18:33.876 "blocks": 2816, 00:18:33.876 "percent": 35 00:18:33.876 } 00:18:33.876 }, 00:18:33.876 "base_bdevs_list": [ 00:18:33.876 { 00:18:33.876 "name": "spare", 00:18:33.876 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:33.876 "is_configured": true, 00:18:33.876 "data_offset": 256, 00:18:33.876 "data_size": 7936 00:18:33.876 }, 00:18:33.876 { 00:18:33.876 "name": "BaseBdev2", 00:18:33.876 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:33.876 
"is_configured": true, 00:18:33.876 "data_offset": 256, 00:18:33.876 "data_size": 7936 00:18:33.876 } 00:18:33.876 ] 00:18:33.876 }' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.876 09:37:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:35.257 09:37:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.257 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.257 "name": "raid_bdev1", 00:18:35.257 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:35.257 "strip_size_kb": 0, 00:18:35.257 "state": "online", 00:18:35.258 "raid_level": "raid1", 00:18:35.258 "superblock": true, 00:18:35.258 "num_base_bdevs": 2, 00:18:35.258 "num_base_bdevs_discovered": 2, 00:18:35.258 "num_base_bdevs_operational": 2, 00:18:35.258 "process": { 00:18:35.258 "type": "rebuild", 00:18:35.258 "target": "spare", 00:18:35.258 "progress": { 00:18:35.258 "blocks": 5632, 00:18:35.258 "percent": 70 00:18:35.258 } 00:18:35.258 }, 00:18:35.258 "base_bdevs_list": [ 00:18:35.258 { 00:18:35.258 "name": "spare", 00:18:35.258 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:35.258 "is_configured": true, 00:18:35.258 "data_offset": 256, 00:18:35.258 "data_size": 7936 00:18:35.258 }, 00:18:35.258 { 00:18:35.258 "name": "BaseBdev2", 00:18:35.258 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:35.258 "is_configured": true, 00:18:35.258 "data_offset": 256, 00:18:35.258 "data_size": 7936 00:18:35.258 } 00:18:35.258 ] 00:18:35.258 }' 00:18:35.258 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.258 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.258 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.258 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.258 09:37:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.826 [2024-11-15 09:37:24.165169] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:35.826 [2024-11-15 09:37:24.165279] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:35.826 [2024-11-15 09:37:24.165440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.084 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.085 "name": "raid_bdev1", 00:18:36.085 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:36.085 "strip_size_kb": 0, 00:18:36.085 "state": "online", 00:18:36.085 "raid_level": "raid1", 00:18:36.085 "superblock": true, 00:18:36.085 
"num_base_bdevs": 2, 00:18:36.085 "num_base_bdevs_discovered": 2, 00:18:36.085 "num_base_bdevs_operational": 2, 00:18:36.085 "base_bdevs_list": [ 00:18:36.085 { 00:18:36.085 "name": "spare", 00:18:36.085 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:36.085 "is_configured": true, 00:18:36.085 "data_offset": 256, 00:18:36.085 "data_size": 7936 00:18:36.085 }, 00:18:36.085 { 00:18:36.085 "name": "BaseBdev2", 00:18:36.085 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:36.085 "is_configured": true, 00:18:36.085 "data_offset": 256, 00:18:36.085 "data_size": 7936 00:18:36.085 } 00:18:36.085 ] 00:18:36.085 }' 00:18:36.085 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.345 09:37:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.345 "name": "raid_bdev1", 00:18:36.345 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:36.345 "strip_size_kb": 0, 00:18:36.345 "state": "online", 00:18:36.345 "raid_level": "raid1", 00:18:36.345 "superblock": true, 00:18:36.345 "num_base_bdevs": 2, 00:18:36.345 "num_base_bdevs_discovered": 2, 00:18:36.345 "num_base_bdevs_operational": 2, 00:18:36.345 "base_bdevs_list": [ 00:18:36.345 { 00:18:36.345 "name": "spare", 00:18:36.345 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:36.345 "is_configured": true, 00:18:36.345 "data_offset": 256, 00:18:36.345 "data_size": 7936 00:18:36.345 }, 00:18:36.345 { 00:18:36.345 "name": "BaseBdev2", 00:18:36.345 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:36.345 "is_configured": true, 00:18:36.345 "data_offset": 256, 00:18:36.345 "data_size": 7936 00:18:36.345 } 00:18:36.345 ] 00:18:36.345 }' 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.345 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.605 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.605 "name": "raid_bdev1", 00:18:36.605 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:36.605 
"strip_size_kb": 0, 00:18:36.605 "state": "online", 00:18:36.605 "raid_level": "raid1", 00:18:36.605 "superblock": true, 00:18:36.605 "num_base_bdevs": 2, 00:18:36.605 "num_base_bdevs_discovered": 2, 00:18:36.606 "num_base_bdevs_operational": 2, 00:18:36.606 "base_bdevs_list": [ 00:18:36.606 { 00:18:36.606 "name": "spare", 00:18:36.606 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:36.606 "is_configured": true, 00:18:36.606 "data_offset": 256, 00:18:36.606 "data_size": 7936 00:18:36.606 }, 00:18:36.606 { 00:18:36.606 "name": "BaseBdev2", 00:18:36.606 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:36.606 "is_configured": true, 00:18:36.606 "data_offset": 256, 00:18:36.606 "data_size": 7936 00:18:36.606 } 00:18:36.606 ] 00:18:36.606 }' 00:18:36.606 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.606 09:37:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.866 [2024-11-15 09:37:25.190972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:36.866 [2024-11-15 09:37:25.191024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:36.866 [2024-11-15 09:37:25.191160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:36.866 [2024-11-15 09:37:25.191250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:36.866 [2024-11-15 09:37:25.191263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:36.866 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:37.126 /dev/nbd0 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.126 1+0 records in 00:18:37.126 1+0 records out 00:18:37.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736244 s, 5.6 MB/s 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:37.126 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:37.387 /dev/nbd1 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.387 1+0 records in 00:18:37.387 1+0 records out 00:18:37.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533824 s, 7.7 MB/s 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:18:37.387 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:37.388 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:37.388 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:37.648 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:37.648 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:37.649 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:37.649 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:37.649 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:37.649 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.649 09:37:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:37.909 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.169 [2024-11-15 09:37:26.459921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:38.169 [2024-11-15 09:37:26.460037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.169 [2024-11-15 09:37:26.460077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:38.169 [2024-11-15 09:37:26.460088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:38.169 [2024-11-15 09:37:26.462573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.169 [2024-11-15 09:37:26.462628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:38.169 [2024-11-15 09:37:26.462726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:38.169 [2024-11-15 09:37:26.462804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.169 [2024-11-15 09:37:26.462981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:38.169 spare 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.169 [2024-11-15 09:37:26.562900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:38.169 [2024-11-15 09:37:26.562989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:38.169 [2024-11-15 09:37:26.563192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:38.169 [2024-11-15 09:37:26.563412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:38.169 [2024-11-15 09:37:26.563421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:38.169 [2024-11-15 09:37:26.563652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.169 "name": "raid_bdev1", 00:18:38.169 "uuid": 
"6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:38.169 "strip_size_kb": 0, 00:18:38.169 "state": "online", 00:18:38.169 "raid_level": "raid1", 00:18:38.169 "superblock": true, 00:18:38.169 "num_base_bdevs": 2, 00:18:38.169 "num_base_bdevs_discovered": 2, 00:18:38.169 "num_base_bdevs_operational": 2, 00:18:38.169 "base_bdevs_list": [ 00:18:38.169 { 00:18:38.169 "name": "spare", 00:18:38.169 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:38.169 "is_configured": true, 00:18:38.169 "data_offset": 256, 00:18:38.169 "data_size": 7936 00:18:38.169 }, 00:18:38.169 { 00:18:38.169 "name": "BaseBdev2", 00:18:38.169 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:38.169 "is_configured": true, 00:18:38.169 "data_offset": 256, 00:18:38.169 "data_size": 7936 00:18:38.169 } 00:18:38.169 ] 00:18:38.169 }' 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.169 09:37:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.740 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.740 "name": "raid_bdev1", 00:18:38.740 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:38.740 "strip_size_kb": 0, 00:18:38.740 "state": "online", 00:18:38.741 "raid_level": "raid1", 00:18:38.741 "superblock": true, 00:18:38.741 "num_base_bdevs": 2, 00:18:38.741 "num_base_bdevs_discovered": 2, 00:18:38.741 "num_base_bdevs_operational": 2, 00:18:38.741 "base_bdevs_list": [ 00:18:38.741 { 00:18:38.741 "name": "spare", 00:18:38.741 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:38.741 "is_configured": true, 00:18:38.741 "data_offset": 256, 00:18:38.741 "data_size": 7936 00:18:38.741 }, 00:18:38.741 { 00:18:38.741 "name": "BaseBdev2", 00:18:38.741 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:38.741 "is_configured": true, 00:18:38.741 "data_offset": 256, 00:18:38.741 "data_size": 7936 00:18:38.741 } 00:18:38.741 ] 00:18:38.741 }' 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.741 09:37:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.741 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:38.741 [2024-11-15 09:37:27.202801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.001 "name": "raid_bdev1", 00:18:39.001 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:39.001 "strip_size_kb": 0, 00:18:39.001 "state": "online", 00:18:39.001 "raid_level": "raid1", 00:18:39.001 "superblock": true, 00:18:39.001 "num_base_bdevs": 2, 00:18:39.001 "num_base_bdevs_discovered": 1, 00:18:39.001 "num_base_bdevs_operational": 1, 00:18:39.001 "base_bdevs_list": [ 00:18:39.001 { 00:18:39.001 "name": null, 00:18:39.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.001 "is_configured": false, 00:18:39.001 "data_offset": 0, 00:18:39.001 "data_size": 7936 00:18:39.001 }, 00:18:39.001 { 00:18:39.001 "name": "BaseBdev2", 00:18:39.001 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:39.001 "is_configured": true, 00:18:39.001 "data_offset": 256, 00:18:39.001 "data_size": 7936 00:18:39.001 } 00:18:39.001 ] 00:18:39.001 }' 00:18:39.001 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.001 09:37:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.261 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:39.261 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.262 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.262 [2024-11-15 09:37:27.682086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.262 [2024-11-15 09:37:27.682491] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:39.262 [2024-11-15 09:37:27.682570] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:39.262 [2024-11-15 09:37:27.682646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.262 [2024-11-15 09:37:27.698385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:39.262 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.262 09:37:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:39.262 [2024-11-15 09:37:27.701089] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.644 "name": "raid_bdev1", 00:18:40.644 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:40.644 "strip_size_kb": 0, 00:18:40.644 "state": "online", 00:18:40.644 "raid_level": "raid1", 00:18:40.644 "superblock": true, 00:18:40.644 "num_base_bdevs": 2, 00:18:40.644 "num_base_bdevs_discovered": 2, 00:18:40.644 "num_base_bdevs_operational": 2, 00:18:40.644 "process": { 00:18:40.644 "type": "rebuild", 00:18:40.644 "target": "spare", 00:18:40.644 "progress": { 00:18:40.644 "blocks": 2560, 00:18:40.644 "percent": 32 00:18:40.644 } 00:18:40.644 }, 00:18:40.644 "base_bdevs_list": [ 00:18:40.644 { 00:18:40.644 "name": "spare", 00:18:40.644 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:40.644 "is_configured": true, 00:18:40.644 "data_offset": 256, 00:18:40.644 "data_size": 7936 00:18:40.644 }, 00:18:40.644 { 00:18:40.644 "name": "BaseBdev2", 00:18:40.644 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:40.644 "is_configured": true, 00:18:40.644 "data_offset": 256, 00:18:40.644 "data_size": 7936 00:18:40.644 } 00:18:40.644 ] 00:18:40.644 }' 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.644 [2024-11-15 09:37:28.864367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.644 [2024-11-15 09:37:28.911760] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:40.644 [2024-11-15 09:37:28.911968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.644 [2024-11-15 09:37:28.911987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.644 [2024-11-15 09:37:28.912013] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.644 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.644 "name": "raid_bdev1", 00:18:40.644 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:40.644 "strip_size_kb": 0, 00:18:40.644 "state": "online", 00:18:40.644 "raid_level": "raid1", 00:18:40.644 "superblock": true, 00:18:40.644 "num_base_bdevs": 2, 00:18:40.644 "num_base_bdevs_discovered": 1, 00:18:40.644 "num_base_bdevs_operational": 1, 00:18:40.644 "base_bdevs_list": [ 00:18:40.644 { 00:18:40.644 "name": null, 00:18:40.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.644 
"is_configured": false, 00:18:40.645 "data_offset": 0, 00:18:40.645 "data_size": 7936 00:18:40.645 }, 00:18:40.645 { 00:18:40.645 "name": "BaseBdev2", 00:18:40.645 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:40.645 "is_configured": true, 00:18:40.645 "data_offset": 256, 00:18:40.645 "data_size": 7936 00:18:40.645 } 00:18:40.645 ] 00:18:40.645 }' 00:18:40.645 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.645 09:37:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.904 09:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:40.904 09:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.904 09:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.904 [2024-11-15 09:37:29.365894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.904 [2024-11-15 09:37:29.365989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.904 [2024-11-15 09:37:29.366021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:40.904 [2024-11-15 09:37:29.366034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.904 [2024-11-15 09:37:29.366379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.904 [2024-11-15 09:37:29.366408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.904 [2024-11-15 09:37:29.366483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:40.904 [2024-11-15 09:37:29.366500] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:18:40.904 [2024-11-15 09:37:29.366511] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:40.904 [2024-11-15 09:37:29.366534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:41.164 [2024-11-15 09:37:29.381425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:41.164 spare 00:18:41.164 09:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.164 09:37:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:41.164 [2024-11-15 09:37:29.383686] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.105 "name": "raid_bdev1", 00:18:42.105 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:42.105 "strip_size_kb": 0, 00:18:42.105 "state": "online", 00:18:42.105 "raid_level": "raid1", 00:18:42.105 "superblock": true, 00:18:42.105 "num_base_bdevs": 2, 00:18:42.105 "num_base_bdevs_discovered": 2, 00:18:42.105 "num_base_bdevs_operational": 2, 00:18:42.105 "process": { 00:18:42.105 "type": "rebuild", 00:18:42.105 "target": "spare", 00:18:42.105 "progress": { 00:18:42.105 "blocks": 2560, 00:18:42.105 "percent": 32 00:18:42.105 } 00:18:42.105 }, 00:18:42.105 "base_bdevs_list": [ 00:18:42.105 { 00:18:42.105 "name": "spare", 00:18:42.105 "uuid": "64e989df-b15b-542d-b38d-ce335aba155a", 00:18:42.105 "is_configured": true, 00:18:42.105 "data_offset": 256, 00:18:42.105 "data_size": 7936 00:18:42.105 }, 00:18:42.105 { 00:18:42.105 "name": "BaseBdev2", 00:18:42.105 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:42.105 "is_configured": true, 00:18:42.105 "data_offset": 256, 00:18:42.105 "data_size": 7936 00:18:42.105 } 00:18:42.105 ] 00:18:42.105 }' 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:42.105 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.105 09:37:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.105 [2024-11-15 09:37:30.521158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.366 [2024-11-15 09:37:30.593924] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:42.366 [2024-11-15 09:37:30.594011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.366 [2024-11-15 09:37:30.594031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:42.366 [2024-11-15 09:37:30.594039] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.366 09:37:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.366 "name": "raid_bdev1", 00:18:42.366 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:42.366 "strip_size_kb": 0, 00:18:42.366 "state": "online", 00:18:42.366 "raid_level": "raid1", 00:18:42.366 "superblock": true, 00:18:42.366 "num_base_bdevs": 2, 00:18:42.366 "num_base_bdevs_discovered": 1, 00:18:42.366 "num_base_bdevs_operational": 1, 00:18:42.366 "base_bdevs_list": [ 00:18:42.366 { 00:18:42.366 "name": null, 00:18:42.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.366 "is_configured": false, 00:18:42.366 "data_offset": 0, 00:18:42.366 "data_size": 7936 00:18:42.366 }, 00:18:42.366 { 00:18:42.366 "name": "BaseBdev2", 00:18:42.366 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:42.366 "is_configured": true, 00:18:42.366 "data_offset": 256, 00:18:42.366 "data_size": 7936 00:18:42.366 } 00:18:42.366 ] 00:18:42.366 }' 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.366 09:37:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.626 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.887 "name": "raid_bdev1", 00:18:42.887 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:42.887 "strip_size_kb": 0, 00:18:42.887 "state": "online", 00:18:42.887 "raid_level": "raid1", 00:18:42.887 "superblock": true, 00:18:42.887 "num_base_bdevs": 2, 00:18:42.887 "num_base_bdevs_discovered": 1, 00:18:42.887 "num_base_bdevs_operational": 1, 00:18:42.887 "base_bdevs_list": [ 00:18:42.887 { 00:18:42.887 "name": null, 00:18:42.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.887 "is_configured": false, 00:18:42.887 "data_offset": 0, 00:18:42.887 "data_size": 7936 00:18:42.887 }, 00:18:42.887 { 00:18:42.887 "name": "BaseBdev2", 00:18:42.887 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:42.887 "is_configured": true, 
00:18:42.887 "data_offset": 256, 00:18:42.887 "data_size": 7936 00:18:42.887 } 00:18:42.887 ] 00:18:42.887 }' 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.887 [2024-11-15 09:37:31.220175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:42.887 [2024-11-15 09:37:31.220268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.887 [2024-11-15 09:37:31.220300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:42.887 [2024-11-15 09:37:31.220311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.887 [2024-11-15 09:37:31.220604] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.887 [2024-11-15 09:37:31.220615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:42.887 [2024-11-15 09:37:31.220683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:42.887 [2024-11-15 09:37:31.220700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:42.887 [2024-11-15 09:37:31.220712] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:42.887 [2024-11-15 09:37:31.220724] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:42.887 BaseBdev1 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.887 09:37:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.826 "name": "raid_bdev1", 00:18:43.826 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:43.826 "strip_size_kb": 0, 00:18:43.826 "state": "online", 00:18:43.826 "raid_level": "raid1", 00:18:43.826 "superblock": true, 00:18:43.826 "num_base_bdevs": 2, 00:18:43.826 "num_base_bdevs_discovered": 1, 00:18:43.826 "num_base_bdevs_operational": 1, 00:18:43.826 "base_bdevs_list": [ 00:18:43.826 { 00:18:43.826 "name": null, 00:18:43.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.826 "is_configured": false, 00:18:43.826 "data_offset": 0, 00:18:43.826 "data_size": 7936 00:18:43.826 }, 00:18:43.826 { 00:18:43.826 "name": "BaseBdev2", 00:18:43.826 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:43.826 "is_configured": true, 00:18:43.826 "data_offset": 256, 00:18:43.826 "data_size": 7936 00:18:43.826 } 00:18:43.826 ] 00:18:43.826 }' 00:18:43.826 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.826 09:37:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.396 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:44.396 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:44.396 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.397 "name": "raid_bdev1", 00:18:44.397 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:44.397 "strip_size_kb": 0, 00:18:44.397 "state": "online", 00:18:44.397 "raid_level": "raid1", 00:18:44.397 "superblock": true, 00:18:44.397 "num_base_bdevs": 2, 00:18:44.397 "num_base_bdevs_discovered": 1, 00:18:44.397 "num_base_bdevs_operational": 1, 00:18:44.397 "base_bdevs_list": [ 00:18:44.397 { 00:18:44.397 "name": null, 00:18:44.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.397 "is_configured": false, 00:18:44.397 "data_offset": 0, 00:18:44.397 
"data_size": 7936 00:18:44.397 }, 00:18:44.397 { 00:18:44.397 "name": "BaseBdev2", 00:18:44.397 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:44.397 "is_configured": true, 00:18:44.397 "data_offset": 256, 00:18:44.397 "data_size": 7936 00:18:44.397 } 00:18:44.397 ] 00:18:44.397 }' 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:44.397 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:44.656 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.656 [2024-11-15 09:37:32.869452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.656 [2024-11-15 09:37:32.869769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:44.656 [2024-11-15 09:37:32.869833] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:44.656 request: 00:18:44.656 { 00:18:44.656 "base_bdev": "BaseBdev1", 00:18:44.656 "raid_bdev": "raid_bdev1", 00:18:44.656 "method": "bdev_raid_add_base_bdev", 00:18:44.656 "req_id": 1 00:18:44.656 } 00:18:44.656 Got JSON-RPC error response 00:18:44.656 response: 00:18:44.656 { 00:18:44.656 "code": -22, 00:18:44.656 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:44.656 } 00:18:44.656 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:44.656 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:44.656 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.656 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.656 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.656 09:37:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.593 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.593 "name": "raid_bdev1", 00:18:45.593 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:45.593 "strip_size_kb": 0, 00:18:45.593 "state": "online", 00:18:45.593 "raid_level": "raid1", 00:18:45.593 "superblock": true, 00:18:45.593 "num_base_bdevs": 2, 00:18:45.593 "num_base_bdevs_discovered": 1, 00:18:45.593 "num_base_bdevs_operational": 1, 00:18:45.593 "base_bdevs_list": [ 
00:18:45.593 { 00:18:45.593 "name": null, 00:18:45.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.593 "is_configured": false, 00:18:45.593 "data_offset": 0, 00:18:45.593 "data_size": 7936 00:18:45.594 }, 00:18:45.594 { 00:18:45.594 "name": "BaseBdev2", 00:18:45.594 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:45.594 "is_configured": true, 00:18:45.594 "data_offset": 256, 00:18:45.594 "data_size": 7936 00:18:45.594 } 00:18:45.594 ] 00:18:45.594 }' 00:18:45.594 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.594 09:37:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.853 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.112 "name": "raid_bdev1", 00:18:46.112 "uuid": "6e1b4107-53aa-4929-96ea-371e30970cc2", 00:18:46.112 "strip_size_kb": 0, 00:18:46.112 "state": "online", 00:18:46.112 "raid_level": "raid1", 00:18:46.112 "superblock": true, 00:18:46.112 "num_base_bdevs": 2, 00:18:46.112 "num_base_bdevs_discovered": 1, 00:18:46.112 "num_base_bdevs_operational": 1, 00:18:46.112 "base_bdevs_list": [ 00:18:46.112 { 00:18:46.112 "name": null, 00:18:46.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.112 "is_configured": false, 00:18:46.112 "data_offset": 0, 00:18:46.112 "data_size": 7936 00:18:46.112 }, 00:18:46.112 { 00:18:46.112 "name": "BaseBdev2", 00:18:46.112 "uuid": "4b7a27f5-2f9b-5ecd-891a-473333b0891e", 00:18:46.112 "is_configured": true, 00:18:46.112 "data_offset": 256, 00:18:46.112 "data_size": 7936 00:18:46.112 } 00:18:46.112 ] 00:18:46.112 }' 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88214 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88214 ']' 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88214 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:46.112 
09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88214 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:46.112 killing process with pid 88214 00:18:46.112 Received shutdown signal, test time was about 60.000000 seconds 00:18:46.112 00:18:46.112 Latency(us) 00:18:46.112 [2024-11-15T09:37:34.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.112 [2024-11-15T09:37:34.575Z] =================================================================================================================== 00:18:46.112 [2024-11-15T09:37:34.575Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88214' 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88214 00:18:46.112 09:37:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88214 00:18:46.112 [2024-11-15 09:37:34.483497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.112 [2024-11-15 09:37:34.483691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.113 [2024-11-15 09:37:34.483751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.113 [2024-11-15 09:37:34.483764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:46.681 [2024-11-15 09:37:34.840928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:47.617 09:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:18:47.617 00:18:47.617 real 0m20.157s 00:18:47.617 user 0m26.239s 00:18:47.617 sys 0m2.780s 00:18:47.617 09:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:47.617 09:37:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.617 ************************************ 00:18:47.617 END TEST raid_rebuild_test_sb_md_separate 00:18:47.617 ************************************ 00:18:47.877 09:37:36 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:47.877 09:37:36 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:47.877 09:37:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:47.877 09:37:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:47.877 09:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.877 ************************************ 00:18:47.877 START TEST raid_state_function_test_sb_md_interleaved 00:18:47.877 ************************************ 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:47.877 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:47.878 09:37:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88907 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88907' 00:18:47.878 Process raid pid: 88907 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88907 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88907 ']' 00:18:47.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.878 09:37:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.878 [2024-11-15 09:37:36.222058] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:18:47.878 [2024-11-15 09:37:36.222282] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.137 [2024-11-15 09:37:36.399831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.137 [2024-11-15 09:37:36.541758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.396 [2024-11-15 09:37:36.794529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.396 [2024-11-15 09:37:36.794731] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.655 [2024-11-15 09:37:37.107679] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:48.655 [2024-11-15 09:37:37.107875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:48.655 [2024-11-15 09:37:37.107912] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:48.655 [2024-11-15 09:37:37.107937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:48.655 09:37:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.655 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.915 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.915 09:37:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.915 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.915 "name": "Existed_Raid", 00:18:48.915 "uuid": "27712860-23f4-4c55-bbe7-525c500d5cd7", 00:18:48.915 "strip_size_kb": 0, 00:18:48.915 "state": "configuring", 00:18:48.915 "raid_level": "raid1", 00:18:48.915 "superblock": true, 00:18:48.915 "num_base_bdevs": 2, 00:18:48.915 "num_base_bdevs_discovered": 0, 00:18:48.915 "num_base_bdevs_operational": 2, 00:18:48.915 "base_bdevs_list": [ 00:18:48.915 { 00:18:48.915 "name": "BaseBdev1", 00:18:48.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.915 "is_configured": false, 00:18:48.915 "data_offset": 0, 00:18:48.915 "data_size": 0 00:18:48.915 }, 00:18:48.915 { 00:18:48.915 "name": "BaseBdev2", 00:18:48.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.915 "is_configured": false, 00:18:48.915 "data_offset": 0, 00:18:48.915 "data_size": 0 00:18:48.915 } 00:18:48.915 ] 00:18:48.915 }' 00:18:48.915 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.915 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.174 [2024-11-15 09:37:37.530948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:49.174 [2024-11-15 09:37:37.531094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.174 [2024-11-15 09:37:37.542926] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:49.174 [2024-11-15 09:37:37.543074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:49.174 [2024-11-15 09:37:37.543103] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:49.174 [2024-11-15 09:37:37.543129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.174 [2024-11-15 09:37:37.599086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.174 BaseBdev1 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.174 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.174 [ 00:18:49.174 { 00:18:49.174 "name": "BaseBdev1", 00:18:49.174 "aliases": [ 00:18:49.174 "183447f1-a902-40d7-bcea-933a19926b00" 00:18:49.174 ], 00:18:49.174 "product_name": "Malloc disk", 00:18:49.174 "block_size": 4128, 00:18:49.174 "num_blocks": 8192, 00:18:49.174 "uuid": "183447f1-a902-40d7-bcea-933a19926b00", 00:18:49.174 "md_size": 32, 00:18:49.174 
"md_interleave": true, 00:18:49.174 "dif_type": 0, 00:18:49.174 "assigned_rate_limits": { 00:18:49.174 "rw_ios_per_sec": 0, 00:18:49.174 "rw_mbytes_per_sec": 0, 00:18:49.174 "r_mbytes_per_sec": 0, 00:18:49.174 "w_mbytes_per_sec": 0 00:18:49.174 }, 00:18:49.174 "claimed": true, 00:18:49.174 "claim_type": "exclusive_write", 00:18:49.174 "zoned": false, 00:18:49.174 "supported_io_types": { 00:18:49.174 "read": true, 00:18:49.174 "write": true, 00:18:49.174 "unmap": true, 00:18:49.174 "flush": true, 00:18:49.174 "reset": true, 00:18:49.174 "nvme_admin": false, 00:18:49.174 "nvme_io": false, 00:18:49.174 "nvme_io_md": false, 00:18:49.174 "write_zeroes": true, 00:18:49.174 "zcopy": true, 00:18:49.174 "get_zone_info": false, 00:18:49.174 "zone_management": false, 00:18:49.174 "zone_append": false, 00:18:49.174 "compare": false, 00:18:49.174 "compare_and_write": false, 00:18:49.174 "abort": true, 00:18:49.174 "seek_hole": false, 00:18:49.174 "seek_data": false, 00:18:49.174 "copy": true, 00:18:49.174 "nvme_iov_md": false 00:18:49.174 }, 00:18:49.174 "memory_domains": [ 00:18:49.174 { 00:18:49.174 "dma_device_id": "system", 00:18:49.174 "dma_device_type": 1 00:18:49.174 }, 00:18:49.174 { 00:18:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.174 "dma_device_type": 2 00:18:49.434 } 00:18:49.434 ], 00:18:49.434 "driver_specific": {} 00:18:49.434 } 00:18:49.434 ] 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.434 09:37:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.434 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.435 "name": "Existed_Raid", 00:18:49.435 "uuid": "72200247-2d2c-4c0e-83c2-0a3bcaadee27", 00:18:49.435 "strip_size_kb": 0, 00:18:49.435 "state": "configuring", 00:18:49.435 "raid_level": "raid1", 
00:18:49.435 "superblock": true, 00:18:49.435 "num_base_bdevs": 2, 00:18:49.435 "num_base_bdevs_discovered": 1, 00:18:49.435 "num_base_bdevs_operational": 2, 00:18:49.435 "base_bdevs_list": [ 00:18:49.435 { 00:18:49.435 "name": "BaseBdev1", 00:18:49.435 "uuid": "183447f1-a902-40d7-bcea-933a19926b00", 00:18:49.435 "is_configured": true, 00:18:49.435 "data_offset": 256, 00:18:49.435 "data_size": 7936 00:18:49.435 }, 00:18:49.435 { 00:18:49.435 "name": "BaseBdev2", 00:18:49.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.435 "is_configured": false, 00:18:49.435 "data_offset": 0, 00:18:49.435 "data_size": 0 00:18:49.435 } 00:18:49.435 ] 00:18:49.435 }' 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.435 09:37:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.693 [2024-11-15 09:37:38.098373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:49.693 [2024-11-15 09:37:38.098466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.693 [2024-11-15 09:37:38.106445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.693 [2024-11-15 09:37:38.108727] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:49.693 [2024-11-15 09:37:38.108784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.693 
09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.693 "name": "Existed_Raid", 00:18:49.693 "uuid": "417e161b-a911-4a6f-82af-fe9494101e68", 00:18:49.693 "strip_size_kb": 0, 00:18:49.693 "state": "configuring", 00:18:49.693 "raid_level": "raid1", 00:18:49.693 "superblock": true, 00:18:49.693 "num_base_bdevs": 2, 00:18:49.693 "num_base_bdevs_discovered": 1, 00:18:49.693 "num_base_bdevs_operational": 2, 00:18:49.693 "base_bdevs_list": [ 00:18:49.693 { 00:18:49.693 "name": "BaseBdev1", 00:18:49.693 "uuid": "183447f1-a902-40d7-bcea-933a19926b00", 00:18:49.693 "is_configured": true, 00:18:49.693 "data_offset": 256, 00:18:49.693 "data_size": 7936 00:18:49.693 }, 00:18:49.693 { 00:18:49.693 "name": "BaseBdev2", 00:18:49.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.693 "is_configured": false, 00:18:49.693 "data_offset": 0, 00:18:49.693 "data_size": 0 00:18:49.693 } 00:18:49.693 ] 00:18:49.693 }' 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:49.693 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.262 [2024-11-15 09:37:38.573553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.262 [2024-11-15 09:37:38.573938] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:50.262 [2024-11-15 09:37:38.573996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:50.262 [2024-11-15 09:37:38.574118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:50.262 [2024-11-15 09:37:38.574244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:50.262 [2024-11-15 09:37:38.574280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:50.262 [2024-11-15 09:37:38.574382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.262 BaseBdev2 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.262 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.262 [ 00:18:50.262 { 00:18:50.262 "name": "BaseBdev2", 00:18:50.262 "aliases": [ 00:18:50.262 "4266b65b-b04e-4dd2-94f3-66c517663473" 00:18:50.262 ], 00:18:50.262 "product_name": "Malloc disk", 00:18:50.262 "block_size": 4128, 00:18:50.262 "num_blocks": 8192, 00:18:50.262 "uuid": "4266b65b-b04e-4dd2-94f3-66c517663473", 00:18:50.262 "md_size": 32, 00:18:50.262 "md_interleave": true, 00:18:50.262 "dif_type": 0, 00:18:50.262 "assigned_rate_limits": { 00:18:50.262 "rw_ios_per_sec": 0, 00:18:50.262 "rw_mbytes_per_sec": 0, 00:18:50.262 "r_mbytes_per_sec": 0, 00:18:50.262 "w_mbytes_per_sec": 0 00:18:50.262 }, 00:18:50.262 "claimed": true, 00:18:50.262 "claim_type": "exclusive_write", 
00:18:50.262 "zoned": false, 00:18:50.262 "supported_io_types": { 00:18:50.262 "read": true, 00:18:50.262 "write": true, 00:18:50.262 "unmap": true, 00:18:50.262 "flush": true, 00:18:50.262 "reset": true, 00:18:50.262 "nvme_admin": false, 00:18:50.262 "nvme_io": false, 00:18:50.262 "nvme_io_md": false, 00:18:50.262 "write_zeroes": true, 00:18:50.262 "zcopy": true, 00:18:50.262 "get_zone_info": false, 00:18:50.262 "zone_management": false, 00:18:50.262 "zone_append": false, 00:18:50.262 "compare": false, 00:18:50.262 "compare_and_write": false, 00:18:50.262 "abort": true, 00:18:50.262 "seek_hole": false, 00:18:50.262 "seek_data": false, 00:18:50.262 "copy": true, 00:18:50.262 "nvme_iov_md": false 00:18:50.262 }, 00:18:50.262 "memory_domains": [ 00:18:50.262 { 00:18:50.262 "dma_device_id": "system", 00:18:50.262 "dma_device_type": 1 00:18:50.262 }, 00:18:50.262 { 00:18:50.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.262 "dma_device_type": 2 00:18:50.262 } 00:18:50.262 ], 00:18:50.262 "driver_specific": {} 00:18:50.262 } 00:18:50.262 ] 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.263 
09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.263 "name": "Existed_Raid", 00:18:50.263 "uuid": "417e161b-a911-4a6f-82af-fe9494101e68", 00:18:50.263 "strip_size_kb": 0, 00:18:50.263 "state": "online", 00:18:50.263 "raid_level": "raid1", 00:18:50.263 "superblock": true, 00:18:50.263 "num_base_bdevs": 2, 00:18:50.263 "num_base_bdevs_discovered": 2, 00:18:50.263 
"num_base_bdevs_operational": 2, 00:18:50.263 "base_bdevs_list": [ 00:18:50.263 { 00:18:50.263 "name": "BaseBdev1", 00:18:50.263 "uuid": "183447f1-a902-40d7-bcea-933a19926b00", 00:18:50.263 "is_configured": true, 00:18:50.263 "data_offset": 256, 00:18:50.263 "data_size": 7936 00:18:50.263 }, 00:18:50.263 { 00:18:50.263 "name": "BaseBdev2", 00:18:50.263 "uuid": "4266b65b-b04e-4dd2-94f3-66c517663473", 00:18:50.263 "is_configured": true, 00:18:50.263 "data_offset": 256, 00:18:50.263 "data_size": 7936 00:18:50.263 } 00:18:50.263 ] 00:18:50.263 }' 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.263 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:50.523 09:37:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.523 09:37:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.523 [2024-11-15 09:37:38.981270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.782 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.782 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:50.782 "name": "Existed_Raid", 00:18:50.782 "aliases": [ 00:18:50.782 "417e161b-a911-4a6f-82af-fe9494101e68" 00:18:50.782 ], 00:18:50.782 "product_name": "Raid Volume", 00:18:50.782 "block_size": 4128, 00:18:50.782 "num_blocks": 7936, 00:18:50.782 "uuid": "417e161b-a911-4a6f-82af-fe9494101e68", 00:18:50.782 "md_size": 32, 00:18:50.782 "md_interleave": true, 00:18:50.782 "dif_type": 0, 00:18:50.782 "assigned_rate_limits": { 00:18:50.782 "rw_ios_per_sec": 0, 00:18:50.782 "rw_mbytes_per_sec": 0, 00:18:50.782 "r_mbytes_per_sec": 0, 00:18:50.782 "w_mbytes_per_sec": 0 00:18:50.782 }, 00:18:50.782 "claimed": false, 00:18:50.782 "zoned": false, 00:18:50.782 "supported_io_types": { 00:18:50.782 "read": true, 00:18:50.782 "write": true, 00:18:50.782 "unmap": false, 00:18:50.782 "flush": false, 00:18:50.782 "reset": true, 00:18:50.782 "nvme_admin": false, 00:18:50.782 "nvme_io": false, 00:18:50.782 "nvme_io_md": false, 00:18:50.782 "write_zeroes": true, 00:18:50.782 "zcopy": false, 00:18:50.782 "get_zone_info": false, 00:18:50.782 "zone_management": false, 00:18:50.782 "zone_append": false, 00:18:50.782 "compare": false, 00:18:50.782 "compare_and_write": false, 00:18:50.782 "abort": false, 00:18:50.782 "seek_hole": false, 00:18:50.782 "seek_data": false, 00:18:50.782 "copy": false, 00:18:50.782 "nvme_iov_md": false 00:18:50.782 }, 00:18:50.782 "memory_domains": [ 00:18:50.782 { 00:18:50.782 "dma_device_id": "system", 00:18:50.782 "dma_device_type": 1 00:18:50.782 }, 00:18:50.782 { 00:18:50.783 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:50.783 "dma_device_type": 2 00:18:50.783 }, 00:18:50.783 { 00:18:50.783 "dma_device_id": "system", 00:18:50.783 "dma_device_type": 1 00:18:50.783 }, 00:18:50.783 { 00:18:50.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.783 "dma_device_type": 2 00:18:50.783 } 00:18:50.783 ], 00:18:50.783 "driver_specific": { 00:18:50.783 "raid": { 00:18:50.783 "uuid": "417e161b-a911-4a6f-82af-fe9494101e68", 00:18:50.783 "strip_size_kb": 0, 00:18:50.783 "state": "online", 00:18:50.783 "raid_level": "raid1", 00:18:50.783 "superblock": true, 00:18:50.783 "num_base_bdevs": 2, 00:18:50.783 "num_base_bdevs_discovered": 2, 00:18:50.783 "num_base_bdevs_operational": 2, 00:18:50.783 "base_bdevs_list": [ 00:18:50.783 { 00:18:50.783 "name": "BaseBdev1", 00:18:50.783 "uuid": "183447f1-a902-40d7-bcea-933a19926b00", 00:18:50.783 "is_configured": true, 00:18:50.783 "data_offset": 256, 00:18:50.783 "data_size": 7936 00:18:50.783 }, 00:18:50.783 { 00:18:50.783 "name": "BaseBdev2", 00:18:50.783 "uuid": "4266b65b-b04e-4dd2-94f3-66c517663473", 00:18:50.783 "is_configured": true, 00:18:50.783 "data_offset": 256, 00:18:50.783 "data_size": 7936 00:18:50.783 } 00:18:50.783 ] 00:18:50.783 } 00:18:50.783 } 00:18:50.783 }' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:50.783 BaseBdev2' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:50.783 
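The checks above compare a four-field tuple (block_size, md_size, md_interleave, dif_type) between the raid volume and each base bdev, extracted with jq and matched in a bash `[[ ... == ... ]]` test. A minimal Python sketch of the same comparison, using field values copied from the log output above (the `prop_tuple` helper name is ours, not part of the test suite):

```python
# Trimmed bdev_get_bdevs output; values copied from the log above.
raid_info = {"block_size": 4128, "md_size": 32, "md_interleave": True, "dif_type": 0}
base_info = {"name": "BaseBdev1", "block_size": 4128, "md_size": 32,
             "md_interleave": True, "dif_type": 0}

def prop_tuple(info):
    # Mirrors jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    return "{} {} {} {}".format(
        info["block_size"], info["md_size"],
        str(info["md_interleave"]).lower(), info["dif_type"])

# The raid volume and every base bdev must expose identical tuples,
# here the interleaved-metadata layout "4128 32 true 0".
assert prop_tuple(raid_info) == prop_tuple(base_info) == "4128 32 true 0"
```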
09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.783 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:50.783 [2024-11-15 09:37:39.204572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:51.042 09:37:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.042 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:51.042 "name": "Existed_Raid", 00:18:51.043 "uuid": "417e161b-a911-4a6f-82af-fe9494101e68", 00:18:51.043 "strip_size_kb": 0, 00:18:51.043 "state": "online", 00:18:51.043 "raid_level": "raid1", 00:18:51.043 "superblock": true, 00:18:51.043 "num_base_bdevs": 2, 00:18:51.043 "num_base_bdevs_discovered": 1, 00:18:51.043 "num_base_bdevs_operational": 1, 00:18:51.043 "base_bdevs_list": [ 00:18:51.043 { 00:18:51.043 "name": null, 00:18:51.043 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:51.043 "is_configured": false, 00:18:51.043 "data_offset": 0, 00:18:51.043 "data_size": 7936 00:18:51.043 }, 00:18:51.043 { 00:18:51.043 "name": "BaseBdev2", 00:18:51.043 "uuid": "4266b65b-b04e-4dd2-94f3-66c517663473", 00:18:51.043 "is_configured": true, 00:18:51.043 "data_offset": 256, 00:18:51.043 "data_size": 7936 00:18:51.043 } 00:18:51.043 ] 00:18:51.043 }' 00:18:51.043 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:51.043 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:51.612 09:37:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.612 [2024-11-15 09:37:39.856111] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:51.612 [2024-11-15 09:37:39.856307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:51.612 [2024-11-15 09:37:39.964837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.612 [2024-11-15 09:37:39.964915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.612 [2024-11-15 09:37:39.964930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.612 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:51.613 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:51.613 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.613 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.613 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:51.613 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:51.613 09:37:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88907 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88907 ']' 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88907 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88907 00:18:51.613 killing process with pid 88907 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88907' 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88907 00:18:51.613 09:37:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88907 00:18:51.613 [2024-11-15 09:37:40.063126] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.878 [2024-11-15 09:37:40.080751] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:53.268 
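After `bdev_malloc_delete BaseBdev1`, the test expects `Existed_Raid` to remain online (raid1 provides redundancy) with one discovered and one operational base bdev, as shown in the JSON dump above. A sketch of that `verify_raid_bdev_state`-style check in Python, with the counts taken from the logged output (the `verify_state` function is our illustration, not the shell helper itself):

```python
import json

# Trimmed from the bdev_raid_get_bdevs dump logged after BaseBdev1 removal.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1
}""")

def verify_state(info, expected_state, raid_level, operational):
    # Mirrors the comparisons verify_raid_bdev_state performs in bdev_raid.sh.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["num_base_bdevs_operational"] == operational)

# raid1 with one surviving base bdev should still report "online".
assert verify_state(raid_bdev_info, "online", "raid1", 1)
```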
09:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:53.268 ************************************ 00:18:53.268 END TEST raid_state_function_test_sb_md_interleaved 00:18:53.268 ************************************ 00:18:53.268 00:18:53.268 real 0m5.181s 00:18:53.268 user 0m7.215s 00:18:53.268 sys 0m1.030s 00:18:53.268 09:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:53.268 09:37:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.268 09:37:41 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:53.268 09:37:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:53.268 09:37:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:53.268 09:37:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.268 ************************************ 00:18:53.268 START TEST raid_superblock_test_md_interleaved 00:18:53.268 ************************************ 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89159 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89159 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89159 ']' 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:53.268 09:37:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:53.268 [2024-11-15 09:37:41.474601] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:18:53.268 [2024-11-15 09:37:41.474819] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89159 ] 00:18:53.268 [2024-11-15 09:37:41.653708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.528 [2024-11-15 09:37:41.794579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.787 [2024-11-15 09:37:42.027404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.787 [2024-11-15 09:37:42.027464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.046 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.047 malloc1 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.047 [2024-11-15 09:37:42.371362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:54.047 [2024-11-15 09:37:42.371517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.047 [2024-11-15 09:37:42.371581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:54.047 [2024-11-15 09:37:42.371613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.047 
[2024-11-15 09:37:42.373792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.047 [2024-11-15 09:37:42.373880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:54.047 pt1 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.047 malloc2 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.047 [2024-11-15 09:37:42.433475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:54.047 [2024-11-15 09:37:42.433561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.047 [2024-11-15 09:37:42.433588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:54.047 [2024-11-15 09:37:42.433598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.047 [2024-11-15 09:37:42.435895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.047 [2024-11-15 09:37:42.435930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:54.047 pt2 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.047 [2024-11-15 09:37:42.445516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:54.047 [2024-11-15 09:37:42.447806] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:54.047 [2024-11-15 09:37:42.448106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:54.047 [2024-11-15 09:37:42.448124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:54.047 [2024-11-15 09:37:42.448246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:54.047 [2024-11-15 09:37:42.448342] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:54.047 [2024-11-15 09:37:42.448363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:54.047 [2024-11-15 09:37:42.448459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.047 
09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.047 "name": "raid_bdev1", 00:18:54.047 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:54.047 "strip_size_kb": 0, 00:18:54.047 "state": "online", 00:18:54.047 "raid_level": "raid1", 00:18:54.047 "superblock": true, 00:18:54.047 "num_base_bdevs": 2, 00:18:54.047 "num_base_bdevs_discovered": 2, 00:18:54.047 "num_base_bdevs_operational": 2, 00:18:54.047 "base_bdevs_list": [ 00:18:54.047 { 00:18:54.047 "name": "pt1", 00:18:54.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:54.047 "is_configured": true, 00:18:54.047 "data_offset": 256, 00:18:54.047 "data_size": 7936 00:18:54.047 }, 00:18:54.047 { 00:18:54.047 "name": "pt2", 00:18:54.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.047 "is_configured": true, 00:18:54.047 "data_offset": 256, 00:18:54.047 "data_size": 7936 00:18:54.047 } 00:18:54.047 ] 00:18:54.047 }' 00:18:54.047 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.047 09:37:42 
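The `bdev_raid.sh@188` step that follows selects the configured base bdev names out of `driver_specific.raid.base_bdevs_list` with a jq filter. A Python equivalent against a trimmed copy of the raid_bdev1 JSON above (structure and names taken from the log):

```python
import json

# Trimmed from the bdev_get_bdevs -b raid_bdev1 dump above.
info = json.loads("""{
  "name": "raid_bdev1",
  "driver_specific": {"raid": {"base_bdevs_list": [
    {"name": "pt1", "is_configured": true},
    {"name": "pt2", "is_configured": true}
  ]}}
}""")

# Mirrors jq -r '.driver_specific.raid.base_bdevs_list[]
#                | select(.is_configured == true).name'
names = [b["name"]
         for b in info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]

assert names == ["pt1", "pt2"]
```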
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.616 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:54.616 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:54.616 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.617 [2024-11-15 09:37:42.925025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:54.617 "name": "raid_bdev1", 00:18:54.617 "aliases": [ 00:18:54.617 "d3f96bec-e759-4e1d-a4fa-17b57364a589" 00:18:54.617 ], 00:18:54.617 "product_name": "Raid Volume", 00:18:54.617 "block_size": 4128, 00:18:54.617 "num_blocks": 7936, 00:18:54.617 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:54.617 "md_size": 32, 
00:18:54.617 "md_interleave": true, 00:18:54.617 "dif_type": 0, 00:18:54.617 "assigned_rate_limits": { 00:18:54.617 "rw_ios_per_sec": 0, 00:18:54.617 "rw_mbytes_per_sec": 0, 00:18:54.617 "r_mbytes_per_sec": 0, 00:18:54.617 "w_mbytes_per_sec": 0 00:18:54.617 }, 00:18:54.617 "claimed": false, 00:18:54.617 "zoned": false, 00:18:54.617 "supported_io_types": { 00:18:54.617 "read": true, 00:18:54.617 "write": true, 00:18:54.617 "unmap": false, 00:18:54.617 "flush": false, 00:18:54.617 "reset": true, 00:18:54.617 "nvme_admin": false, 00:18:54.617 "nvme_io": false, 00:18:54.617 "nvme_io_md": false, 00:18:54.617 "write_zeroes": true, 00:18:54.617 "zcopy": false, 00:18:54.617 "get_zone_info": false, 00:18:54.617 "zone_management": false, 00:18:54.617 "zone_append": false, 00:18:54.617 "compare": false, 00:18:54.617 "compare_and_write": false, 00:18:54.617 "abort": false, 00:18:54.617 "seek_hole": false, 00:18:54.617 "seek_data": false, 00:18:54.617 "copy": false, 00:18:54.617 "nvme_iov_md": false 00:18:54.617 }, 00:18:54.617 "memory_domains": [ 00:18:54.617 { 00:18:54.617 "dma_device_id": "system", 00:18:54.617 "dma_device_type": 1 00:18:54.617 }, 00:18:54.617 { 00:18:54.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.617 "dma_device_type": 2 00:18:54.617 }, 00:18:54.617 { 00:18:54.617 "dma_device_id": "system", 00:18:54.617 "dma_device_type": 1 00:18:54.617 }, 00:18:54.617 { 00:18:54.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.617 "dma_device_type": 2 00:18:54.617 } 00:18:54.617 ], 00:18:54.617 "driver_specific": { 00:18:54.617 "raid": { 00:18:54.617 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:54.617 "strip_size_kb": 0, 00:18:54.617 "state": "online", 00:18:54.617 "raid_level": "raid1", 00:18:54.617 "superblock": true, 00:18:54.617 "num_base_bdevs": 2, 00:18:54.617 "num_base_bdevs_discovered": 2, 00:18:54.617 "num_base_bdevs_operational": 2, 00:18:54.617 "base_bdevs_list": [ 00:18:54.617 { 00:18:54.617 "name": "pt1", 00:18:54.617 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:54.617 "is_configured": true, 00:18:54.617 "data_offset": 256, 00:18:54.617 "data_size": 7936 00:18:54.617 }, 00:18:54.617 { 00:18:54.617 "name": "pt2", 00:18:54.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.617 "is_configured": true, 00:18:54.617 "data_offset": 256, 00:18:54.617 "data_size": 7936 00:18:54.617 } 00:18:54.617 ] 00:18:54.617 } 00:18:54.617 } 00:18:54.617 }' 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:54.617 pt2' 00:18:54.617 09:37:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.617 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:54.617 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.617 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.617 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:54.617 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.617 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.617 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:54.878 09:37:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.878 [2024-11-15 09:37:43.140602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d3f96bec-e759-4e1d-a4fa-17b57364a589 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z d3f96bec-e759-4e1d-a4fa-17b57364a589 ']' 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.878 [2024-11-15 09:37:43.184249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.878 [2024-11-15 09:37:43.184281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:54.878 [2024-11-15 09:37:43.184391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:54.878 [2024-11-15 09:37:43.184455] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:54.878 [2024-11-15 09:37:43.184469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.878 09:37:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.878 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 09:37:43 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 [2024-11-15 09:37:43.323996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:54.879 [2024-11-15 09:37:43.326260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:54.879 [2024-11-15 09:37:43.326396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:54.879 [2024-11-15 09:37:43.326509] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:54.879 [2024-11-15 09:37:43.326528] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.879 [2024-11-15 09:37:43.326540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:54.879 request: 00:18:54.879 { 00:18:54.879 "name": "raid_bdev1", 00:18:54.879 "raid_level": "raid1", 00:18:54.879 "base_bdevs": [ 00:18:54.879 "malloc1", 00:18:54.879 "malloc2" 00:18:54.879 ], 00:18:54.879 "superblock": false, 00:18:54.879 "method": "bdev_raid_create", 00:18:54.879 "req_id": 1 00:18:54.879 } 00:18:54.879 Got JSON-RPC error response 00:18:54.879 response: 00:18:54.879 { 00:18:54.879 "code": -17, 00:18:54.879 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:54.879 } 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.879 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:54.879 09:37:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.139 [2024-11-15 09:37:43.387893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:55.139 [2024-11-15 09:37:43.388018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.139 [2024-11-15 09:37:43.388067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:55.139 [2024-11-15 09:37:43.388112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.139 [2024-11-15 09:37:43.390404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.139 [2024-11-15 09:37:43.390482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:55.139 [2024-11-15 09:37:43.390561] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:55.139 [2024-11-15 09:37:43.390665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.139 pt1 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.139 09:37:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.139 
"name": "raid_bdev1", 00:18:55.139 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:55.139 "strip_size_kb": 0, 00:18:55.139 "state": "configuring", 00:18:55.139 "raid_level": "raid1", 00:18:55.139 "superblock": true, 00:18:55.139 "num_base_bdevs": 2, 00:18:55.139 "num_base_bdevs_discovered": 1, 00:18:55.139 "num_base_bdevs_operational": 2, 00:18:55.139 "base_bdevs_list": [ 00:18:55.139 { 00:18:55.139 "name": "pt1", 00:18:55.139 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.139 "is_configured": true, 00:18:55.139 "data_offset": 256, 00:18:55.139 "data_size": 7936 00:18:55.139 }, 00:18:55.139 { 00:18:55.139 "name": null, 00:18:55.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.139 "is_configured": false, 00:18:55.139 "data_offset": 256, 00:18:55.139 "data_size": 7936 00:18:55.139 } 00:18:55.139 ] 00:18:55.139 }' 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.139 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.399 [2024-11-15 09:37:43.847158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.399 [2024-11-15 09:37:43.847283] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.399 [2024-11-15 09:37:43.847310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:55.399 [2024-11-15 09:37:43.847323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.399 [2024-11-15 09:37:43.847553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.399 [2024-11-15 09:37:43.847568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.399 [2024-11-15 09:37:43.847632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:55.399 [2024-11-15 09:37:43.847664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.399 [2024-11-15 09:37:43.847767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:55.399 [2024-11-15 09:37:43.847780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:55.399 [2024-11-15 09:37:43.847855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:55.399 [2024-11-15 09:37:43.847948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:55.399 [2024-11-15 09:37:43.847957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:55.399 [2024-11-15 09:37:43.848030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.399 pt2 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:55.399 09:37:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.399 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.658 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.658 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.658 "name": 
"raid_bdev1", 00:18:55.658 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:55.658 "strip_size_kb": 0, 00:18:55.658 "state": "online", 00:18:55.658 "raid_level": "raid1", 00:18:55.658 "superblock": true, 00:18:55.658 "num_base_bdevs": 2, 00:18:55.658 "num_base_bdevs_discovered": 2, 00:18:55.658 "num_base_bdevs_operational": 2, 00:18:55.658 "base_bdevs_list": [ 00:18:55.658 { 00:18:55.658 "name": "pt1", 00:18:55.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.658 "is_configured": true, 00:18:55.658 "data_offset": 256, 00:18:55.658 "data_size": 7936 00:18:55.658 }, 00:18:55.658 { 00:18:55.658 "name": "pt2", 00:18:55.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.658 "is_configured": true, 00:18:55.658 "data_offset": 256, 00:18:55.658 "data_size": 7936 00:18:55.658 } 00:18:55.658 ] 00:18:55.658 }' 00:18:55.658 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.658 09:37:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:55.917 09:37:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:55.917 [2024-11-15 09:37:44.314750] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.917 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:55.917 "name": "raid_bdev1", 00:18:55.917 "aliases": [ 00:18:55.917 "d3f96bec-e759-4e1d-a4fa-17b57364a589" 00:18:55.917 ], 00:18:55.917 "product_name": "Raid Volume", 00:18:55.917 "block_size": 4128, 00:18:55.917 "num_blocks": 7936, 00:18:55.917 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:55.917 "md_size": 32, 00:18:55.917 "md_interleave": true, 00:18:55.917 "dif_type": 0, 00:18:55.917 "assigned_rate_limits": { 00:18:55.917 "rw_ios_per_sec": 0, 00:18:55.917 "rw_mbytes_per_sec": 0, 00:18:55.917 "r_mbytes_per_sec": 0, 00:18:55.917 "w_mbytes_per_sec": 0 00:18:55.917 }, 00:18:55.917 "claimed": false, 00:18:55.917 "zoned": false, 00:18:55.917 "supported_io_types": { 00:18:55.917 "read": true, 00:18:55.917 "write": true, 00:18:55.917 "unmap": false, 00:18:55.917 "flush": false, 00:18:55.917 "reset": true, 00:18:55.917 "nvme_admin": false, 00:18:55.917 "nvme_io": false, 00:18:55.917 "nvme_io_md": false, 00:18:55.917 "write_zeroes": true, 00:18:55.917 "zcopy": false, 00:18:55.917 "get_zone_info": false, 00:18:55.917 "zone_management": false, 00:18:55.917 "zone_append": false, 00:18:55.917 "compare": false, 00:18:55.917 "compare_and_write": false, 00:18:55.917 "abort": false, 00:18:55.917 "seek_hole": false, 00:18:55.917 "seek_data": false, 00:18:55.917 "copy": false, 00:18:55.917 "nvme_iov_md": 
false 00:18:55.917 }, 00:18:55.917 "memory_domains": [ 00:18:55.917 { 00:18:55.917 "dma_device_id": "system", 00:18:55.917 "dma_device_type": 1 00:18:55.917 }, 00:18:55.917 { 00:18:55.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.917 "dma_device_type": 2 00:18:55.917 }, 00:18:55.917 { 00:18:55.917 "dma_device_id": "system", 00:18:55.917 "dma_device_type": 1 00:18:55.917 }, 00:18:55.917 { 00:18:55.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.917 "dma_device_type": 2 00:18:55.917 } 00:18:55.918 ], 00:18:55.918 "driver_specific": { 00:18:55.918 "raid": { 00:18:55.918 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:55.918 "strip_size_kb": 0, 00:18:55.918 "state": "online", 00:18:55.918 "raid_level": "raid1", 00:18:55.918 "superblock": true, 00:18:55.918 "num_base_bdevs": 2, 00:18:55.918 "num_base_bdevs_discovered": 2, 00:18:55.918 "num_base_bdevs_operational": 2, 00:18:55.918 "base_bdevs_list": [ 00:18:55.918 { 00:18:55.918 "name": "pt1", 00:18:55.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.918 "is_configured": true, 00:18:55.918 "data_offset": 256, 00:18:55.918 "data_size": 7936 00:18:55.918 }, 00:18:55.918 { 00:18:55.918 "name": "pt2", 00:18:55.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.918 "is_configured": true, 00:18:55.918 "data_offset": 256, 00:18:55.918 "data_size": 7936 00:18:55.918 } 00:18:55.918 ] 00:18:55.918 } 00:18:55.918 } 00:18:55.918 }' 00:18:55.918 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:56.178 pt2' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 [2024-11-15 09:37:44.562361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' d3f96bec-e759-4e1d-a4fa-17b57364a589 '!=' d3f96bec-e759-4e1d-a4fa-17b57364a589 ']' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 [2024-11-15 09:37:44.610126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.178 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.442 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:56.442 "name": "raid_bdev1", 00:18:56.442 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:56.442 "strip_size_kb": 0, 00:18:56.442 "state": "online", 00:18:56.442 "raid_level": "raid1", 00:18:56.442 "superblock": true, 00:18:56.442 "num_base_bdevs": 2, 00:18:56.442 "num_base_bdevs_discovered": 1, 00:18:56.442 "num_base_bdevs_operational": 1, 00:18:56.442 "base_bdevs_list": [ 00:18:56.442 { 00:18:56.442 "name": null, 00:18:56.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.442 "is_configured": false, 00:18:56.442 "data_offset": 0, 00:18:56.442 "data_size": 7936 00:18:56.442 }, 00:18:56.442 { 00:18:56.442 "name": "pt2", 00:18:56.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.442 "is_configured": true, 00:18:56.442 "data_offset": 256, 00:18:56.442 "data_size": 7936 00:18:56.442 } 00:18:56.442 ] 00:18:56.442 }' 00:18:56.443 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.443 09:37:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.708 [2024-11-15 09:37:45.081180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.708 [2024-11-15 09:37:45.081340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.708 [2024-11-15 09:37:45.081476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.708 [2024-11-15 09:37:45.081565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:56.708 [2024-11-15 09:37:45.081630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.708 [2024-11-15 09:37:45.153106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:56.708 [2024-11-15 09:37:45.153322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:56.708 [2024-11-15 09:37:45.153382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:56.708 [2024-11-15 09:37:45.153424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:56.708 [2024-11-15 09:37:45.155884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:56.708 [2024-11-15 09:37:45.155987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:56.708 [2024-11-15 09:37:45.156117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:56.708 [2024-11-15 09:37:45.156219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:56.708 [2024-11-15 09:37:45.156337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:56.708 [2024-11-15 09:37:45.156376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:56.708 [2024-11-15 09:37:45.156512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:56.708 [2024-11-15 09:37:45.156625] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:56.708 [2024-11-15 09:37:45.156658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:56.708 [2024-11-15 09:37:45.156774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.708 pt2 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.708 09:37:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:56.708 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.968 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.968 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.968 "name": "raid_bdev1", 00:18:56.968 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:56.968 "strip_size_kb": 0, 00:18:56.968 "state": "online", 00:18:56.968 "raid_level": "raid1", 00:18:56.968 "superblock": true, 00:18:56.968 "num_base_bdevs": 2, 00:18:56.968 "num_base_bdevs_discovered": 1, 00:18:56.968 "num_base_bdevs_operational": 1, 00:18:56.968 "base_bdevs_list": [ 00:18:56.968 { 00:18:56.968 "name": null, 00:18:56.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.968 "is_configured": false, 00:18:56.968 "data_offset": 256, 00:18:56.968 "data_size": 7936 00:18:56.968 }, 00:18:56.968 { 00:18:56.968 "name": "pt2", 00:18:56.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:56.968 "is_configured": true, 00:18:56.968 "data_offset": 256, 00:18:56.968 "data_size": 7936 00:18:56.968 } 00:18:56.968 ] 00:18:56.968 }' 00:18:56.968 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.968 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:57.227 09:37:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 [2024-11-15 09:37:45.644253] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.227 [2024-11-15 09:37:45.644299] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.227 [2024-11-15 09:37:45.644407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.227 [2024-11-15 09:37:45.644471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.227 [2024-11-15 09:37:45.644482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:57.227 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.486 [2024-11-15 09:37:45.704307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:57.486 [2024-11-15 09:37:45.704409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.486 [2024-11-15 09:37:45.704438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:57.486 [2024-11-15 09:37:45.704450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.486 [2024-11-15 09:37:45.706828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.486 [2024-11-15 09:37:45.706925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:57.486 [2024-11-15 09:37:45.707010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:57.486 [2024-11-15 09:37:45.707068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:57.486 [2024-11-15 09:37:45.707186] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:57.486 [2024-11-15 09:37:45.707197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.486 [2024-11-15 09:37:45.707220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:57.486 [2024-11-15 09:37:45.707282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:57.486 [2024-11-15 09:37:45.707360] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:57.486 [2024-11-15 09:37:45.707369] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:57.486 [2024-11-15 09:37:45.707443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:57.486 [2024-11-15 09:37:45.707510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:57.486 [2024-11-15 09:37:45.707522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:57.486 [2024-11-15 09:37:45.707624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.486 pt1 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.486 09:37:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.486 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.486 "name": "raid_bdev1", 00:18:57.486 "uuid": "d3f96bec-e759-4e1d-a4fa-17b57364a589", 00:18:57.486 "strip_size_kb": 0, 00:18:57.486 "state": "online", 00:18:57.486 "raid_level": "raid1", 00:18:57.486 "superblock": true, 00:18:57.486 "num_base_bdevs": 2, 00:18:57.486 "num_base_bdevs_discovered": 1, 00:18:57.486 "num_base_bdevs_operational": 1, 00:18:57.486 "base_bdevs_list": [ 00:18:57.487 { 00:18:57.487 "name": null, 00:18:57.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.487 "is_configured": false, 00:18:57.487 "data_offset": 256, 00:18:57.487 "data_size": 7936 00:18:57.487 }, 00:18:57.487 { 00:18:57.487 "name": "pt2", 00:18:57.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:57.487 "is_configured": true, 00:18:57.487 "data_offset": 256, 00:18:57.487 "data_size": 7936 00:18:57.487 } 00:18:57.487 ] 00:18:57.487 }' 00:18:57.487 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.487 09:37:45 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.766 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:57.766 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:57.766 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.766 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:58.063 [2024-11-15 09:37:46.255627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' d3f96bec-e759-4e1d-a4fa-17b57364a589 '!=' d3f96bec-e759-4e1d-a4fa-17b57364a589 ']' 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89159 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89159 ']' 00:18:58.063 09:37:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89159 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89159 00:18:58.063 killing process with pid 89159 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89159' 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89159 00:18:58.063 [2024-11-15 09:37:46.334357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:58.063 09:37:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89159 00:18:58.063 [2024-11-15 09:37:46.334506] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.063 [2024-11-15 09:37:46.334569] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.063 [2024-11-15 09:37:46.334587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:58.323 [2024-11-15 09:37:46.567444] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:59.700 09:37:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:59.700 00:18:59.700 real 0m6.427s 00:18:59.700 user 0m9.556s 00:18:59.700 sys 0m1.298s 00:18:59.700 
09:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:59.700 09:37:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.700 ************************************ 00:18:59.700 END TEST raid_superblock_test_md_interleaved 00:18:59.700 ************************************ 00:18:59.700 09:37:47 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:59.700 09:37:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:59.700 09:37:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:59.700 09:37:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:59.700 ************************************ 00:18:59.700 START TEST raid_rebuild_test_sb_md_interleaved 00:18:59.700 ************************************ 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:59.700 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89482 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89482 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89482 ']' 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:59.701 09:37:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:59.701 [2024-11-15 09:37:47.982457] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:18:59.701 [2024-11-15 09:37:47.982687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:59.701 Zero copy mechanism will not be used. 
00:18:59.701 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89482 ] 00:18:59.701 [2024-11-15 09:37:48.158776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.960 [2024-11-15 09:37:48.297396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.219 [2024-11-15 09:37:48.575314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.219 [2024-11-15 09:37:48.575530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 BaseBdev1_malloc 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.479 [2024-11-15 09:37:48.897570] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:00.479 [2024-11-15 09:37:48.897784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.479 [2024-11-15 09:37:48.897836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:00.479 [2024-11-15 09:37:48.897893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.479 [2024-11-15 09:37:48.900540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.479 [2024-11-15 09:37:48.900655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:00.479 BaseBdev1 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.479 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.738 BaseBdev2_malloc 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.738 [2024-11-15 09:37:48.966573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:19:00.738 [2024-11-15 09:37:48.966687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.738 [2024-11-15 09:37:48.966716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:00.738 [2024-11-15 09:37:48.966734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.738 [2024-11-15 09:37:48.969319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.738 [2024-11-15 09:37:48.969376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:00.738 BaseBdev2 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.738 09:37:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.738 spare_malloc 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.738 spare_delay 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.738 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.738 [2024-11-15 09:37:49.060263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.738 [2024-11-15 09:37:49.060374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.738 [2024-11-15 09:37:49.060405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:00.738 [2024-11-15 09:37:49.060422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.739 [2024-11-15 09:37:49.063150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.739 [2024-11-15 09:37:49.063314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.739 spare 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.739 [2024-11-15 09:37:49.072354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.739 [2024-11-15 09:37:49.075136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.739 [2024-11-15 09:37:49.075440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:00.739 [2024-11-15 09:37:49.075462] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:00.739 [2024-11-15 09:37:49.075622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:00.739 [2024-11-15 09:37:49.075721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:00.739 [2024-11-15 09:37:49.075732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:00.739 [2024-11-15 09:37:49.075867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.739 09:37:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.739 "name": "raid_bdev1", 00:19:00.739 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:00.739 "strip_size_kb": 0, 00:19:00.739 "state": "online", 00:19:00.739 "raid_level": "raid1", 00:19:00.739 "superblock": true, 00:19:00.739 "num_base_bdevs": 2, 00:19:00.739 "num_base_bdevs_discovered": 2, 00:19:00.739 "num_base_bdevs_operational": 2, 00:19:00.739 "base_bdevs_list": [ 00:19:00.739 { 00:19:00.739 "name": "BaseBdev1", 00:19:00.739 "uuid": "5b810e58-396d-5d04-9ae8-b0dae9add512", 00:19:00.739 "is_configured": true, 00:19:00.739 "data_offset": 256, 00:19:00.739 "data_size": 7936 00:19:00.739 }, 00:19:00.739 { 00:19:00.739 "name": "BaseBdev2", 00:19:00.739 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:00.739 "is_configured": true, 00:19:00.739 "data_offset": 256, 00:19:00.739 "data_size": 7936 00:19:00.739 } 00:19:00.739 ] 00:19:00.739 }' 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.739 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:01.307 09:37:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.307 [2024-11-15 09:37:49.508200] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.307 09:37:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.307 [2024-11-15 09:37:49.607656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.307 09:37:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.307 "name": "raid_bdev1", 00:19:01.307 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:01.307 "strip_size_kb": 0, 00:19:01.307 "state": "online", 00:19:01.307 "raid_level": "raid1", 00:19:01.307 "superblock": true, 00:19:01.307 "num_base_bdevs": 2, 00:19:01.307 "num_base_bdevs_discovered": 1, 00:19:01.307 "num_base_bdevs_operational": 1, 00:19:01.307 "base_bdevs_list": [ 00:19:01.307 { 00:19:01.307 "name": null, 00:19:01.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.307 "is_configured": false, 00:19:01.307 "data_offset": 0, 00:19:01.307 "data_size": 7936 00:19:01.307 }, 00:19:01.307 { 00:19:01.307 "name": "BaseBdev2", 00:19:01.307 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:01.307 "is_configured": true, 00:19:01.307 "data_offset": 256, 00:19:01.307 "data_size": 7936 00:19:01.307 } 00:19:01.307 ] 00:19:01.307 }' 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.307 09:37:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 09:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.876 09:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.876 09:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:01.876 [2024-11-15 09:37:50.070907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.876 [2024-11-15 09:37:50.093650] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:01.876 09:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.876 09:37:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:01.876 [2024-11-15 09:37:50.096309] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:02.815 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.815 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.815 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.816 "name": "raid_bdev1", 00:19:02.816 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:02.816 "strip_size_kb": 0, 00:19:02.816 "state": "online", 00:19:02.816 "raid_level": "raid1", 00:19:02.816 
"superblock": true, 00:19:02.816 "num_base_bdevs": 2, 00:19:02.816 "num_base_bdevs_discovered": 2, 00:19:02.816 "num_base_bdevs_operational": 2, 00:19:02.816 "process": { 00:19:02.816 "type": "rebuild", 00:19:02.816 "target": "spare", 00:19:02.816 "progress": { 00:19:02.816 "blocks": 2560, 00:19:02.816 "percent": 32 00:19:02.816 } 00:19:02.816 }, 00:19:02.816 "base_bdevs_list": [ 00:19:02.816 { 00:19:02.816 "name": "spare", 00:19:02.816 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:02.816 "is_configured": true, 00:19:02.816 "data_offset": 256, 00:19:02.816 "data_size": 7936 00:19:02.816 }, 00:19:02.816 { 00:19:02.816 "name": "BaseBdev2", 00:19:02.816 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:02.816 "is_configured": true, 00:19:02.816 "data_offset": 256, 00:19:02.816 "data_size": 7936 00:19:02.816 } 00:19:02.816 ] 00:19:02.816 }' 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.816 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:02.816 [2024-11-15 09:37:51.236280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.076 [2024-11-15 09:37:51.306873] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:19:03.076 [2024-11-15 09:37:51.306978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.076 [2024-11-15 09:37:51.306999] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.076 [2024-11-15 09:37:51.307014] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.076 "name": "raid_bdev1", 00:19:03.076 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:03.076 "strip_size_kb": 0, 00:19:03.076 "state": "online", 00:19:03.076 "raid_level": "raid1", 00:19:03.076 "superblock": true, 00:19:03.076 "num_base_bdevs": 2, 00:19:03.076 "num_base_bdevs_discovered": 1, 00:19:03.076 "num_base_bdevs_operational": 1, 00:19:03.076 "base_bdevs_list": [ 00:19:03.076 { 00:19:03.076 "name": null, 00:19:03.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.076 "is_configured": false, 00:19:03.076 "data_offset": 0, 00:19:03.076 "data_size": 7936 00:19:03.076 }, 00:19:03.076 { 00:19:03.076 "name": "BaseBdev2", 00:19:03.076 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:03.076 "is_configured": true, 00:19:03.076 "data_offset": 256, 00:19:03.076 "data_size": 7936 00:19:03.076 } 00:19:03.076 ] 00:19:03.076 }' 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.076 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:03.336 
09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.336 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.595 "name": "raid_bdev1", 00:19:03.595 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:03.595 "strip_size_kb": 0, 00:19:03.595 "state": "online", 00:19:03.595 "raid_level": "raid1", 00:19:03.595 "superblock": true, 00:19:03.595 "num_base_bdevs": 2, 00:19:03.595 "num_base_bdevs_discovered": 1, 00:19:03.595 "num_base_bdevs_operational": 1, 00:19:03.595 "base_bdevs_list": [ 00:19:03.595 { 00:19:03.595 "name": null, 00:19:03.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.595 "is_configured": false, 00:19:03.595 "data_offset": 0, 00:19:03.595 "data_size": 7936 00:19:03.595 }, 00:19:03.595 { 00:19:03.595 "name": "BaseBdev2", 00:19:03.595 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:03.595 "is_configured": true, 00:19:03.595 "data_offset": 256, 00:19:03.595 "data_size": 7936 00:19:03.595 } 00:19:03.595 ] 00:19:03.595 }' 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.595 09:37:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:03.595 [2024-11-15 09:37:51.890332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.595 [2024-11-15 09:37:51.911620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:03.595 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.596 09:37:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:03.596 [2024-11-15 09:37:51.914140] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.537 
09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.537 "name": "raid_bdev1", 00:19:04.537 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:04.537 "strip_size_kb": 0, 00:19:04.537 "state": "online", 00:19:04.537 "raid_level": "raid1", 00:19:04.537 "superblock": true, 00:19:04.537 "num_base_bdevs": 2, 00:19:04.537 "num_base_bdevs_discovered": 2, 00:19:04.537 "num_base_bdevs_operational": 2, 00:19:04.537 "process": { 00:19:04.537 "type": "rebuild", 00:19:04.537 "target": "spare", 00:19:04.537 "progress": { 00:19:04.537 "blocks": 2560, 00:19:04.537 "percent": 32 00:19:04.537 } 00:19:04.537 }, 00:19:04.537 "base_bdevs_list": [ 00:19:04.537 { 00:19:04.537 "name": "spare", 00:19:04.537 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:04.537 "is_configured": true, 00:19:04.537 "data_offset": 256, 00:19:04.537 "data_size": 7936 00:19:04.537 }, 00:19:04.537 { 00:19:04.537 "name": "BaseBdev2", 00:19:04.537 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:04.537 "is_configured": true, 00:19:04.537 "data_offset": 256, 00:19:04.537 "data_size": 7936 00:19:04.537 } 00:19:04.537 ] 00:19:04.537 }' 00:19:04.537 09:37:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:04.822 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=767 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.822 "name": "raid_bdev1", 00:19:04.822 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:04.822 "strip_size_kb": 0, 00:19:04.822 "state": "online", 00:19:04.822 "raid_level": "raid1", 00:19:04.822 "superblock": true, 00:19:04.822 "num_base_bdevs": 2, 00:19:04.822 "num_base_bdevs_discovered": 2, 00:19:04.822 "num_base_bdevs_operational": 2, 00:19:04.822 "process": { 00:19:04.822 "type": "rebuild", 00:19:04.822 "target": "spare", 00:19:04.822 "progress": { 00:19:04.822 "blocks": 2816, 00:19:04.822 "percent": 35 00:19:04.822 } 00:19:04.822 }, 00:19:04.822 "base_bdevs_list": [ 00:19:04.822 { 00:19:04.822 "name": "spare", 00:19:04.822 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:04.822 "is_configured": true, 00:19:04.822 "data_offset": 256, 00:19:04.822 "data_size": 7936 00:19:04.822 }, 00:19:04.822 { 00:19:04.822 "name": "BaseBdev2", 00:19:04.822 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:04.822 "is_configured": true, 00:19:04.822 "data_offset": 256, 00:19:04.822 "data_size": 7936 00:19:04.822 } 00:19:04.822 ] 00:19:04.822 }' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:04.822 09:37:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.822 09:37:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:05.759 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.017 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.017 "name": "raid_bdev1", 00:19:06.017 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:06.017 "strip_size_kb": 0, 00:19:06.017 "state": 
"online", 00:19:06.017 "raid_level": "raid1", 00:19:06.017 "superblock": true, 00:19:06.017 "num_base_bdevs": 2, 00:19:06.017 "num_base_bdevs_discovered": 2, 00:19:06.017 "num_base_bdevs_operational": 2, 00:19:06.017 "process": { 00:19:06.017 "type": "rebuild", 00:19:06.017 "target": "spare", 00:19:06.017 "progress": { 00:19:06.017 "blocks": 5632, 00:19:06.017 "percent": 70 00:19:06.017 } 00:19:06.017 }, 00:19:06.017 "base_bdevs_list": [ 00:19:06.017 { 00:19:06.017 "name": "spare", 00:19:06.017 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:06.017 "is_configured": true, 00:19:06.017 "data_offset": 256, 00:19:06.017 "data_size": 7936 00:19:06.017 }, 00:19:06.017 { 00:19:06.017 "name": "BaseBdev2", 00:19:06.017 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:06.017 "is_configured": true, 00:19:06.017 "data_offset": 256, 00:19:06.017 "data_size": 7936 00:19:06.017 } 00:19:06.017 ] 00:19:06.017 }' 00:19:06.017 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.017 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:06.017 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:06.017 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:06.017 09:37:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:06.583 [2024-11-15 09:37:55.037893] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:06.583 [2024-11-15 09:37:55.038057] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:06.583 [2024-11-15 09:37:55.038225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.150 "name": "raid_bdev1", 00:19:07.150 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:07.150 "strip_size_kb": 0, 00:19:07.150 "state": "online", 00:19:07.150 "raid_level": "raid1", 00:19:07.150 "superblock": true, 00:19:07.150 "num_base_bdevs": 2, 00:19:07.150 "num_base_bdevs_discovered": 2, 00:19:07.150 "num_base_bdevs_operational": 2, 00:19:07.150 "base_bdevs_list": [ 00:19:07.150 { 00:19:07.150 "name": "spare", 00:19:07.150 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:07.150 "is_configured": true, 00:19:07.150 "data_offset": 256, 
00:19:07.150 "data_size": 7936 00:19:07.150 }, 00:19:07.150 { 00:19:07.150 "name": "BaseBdev2", 00:19:07.150 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:07.150 "is_configured": true, 00:19:07.150 "data_offset": 256, 00:19:07.150 "data_size": 7936 00:19:07.150 } 00:19:07.150 ] 00:19:07.150 }' 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.150 09:37:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.150 "name": "raid_bdev1", 00:19:07.150 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:07.150 "strip_size_kb": 0, 00:19:07.150 "state": "online", 00:19:07.150 "raid_level": "raid1", 00:19:07.150 "superblock": true, 00:19:07.150 "num_base_bdevs": 2, 00:19:07.150 "num_base_bdevs_discovered": 2, 00:19:07.150 "num_base_bdevs_operational": 2, 00:19:07.150 "base_bdevs_list": [ 00:19:07.150 { 00:19:07.150 "name": "spare", 00:19:07.150 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:07.150 "is_configured": true, 00:19:07.150 "data_offset": 256, 00:19:07.150 "data_size": 7936 00:19:07.150 }, 00:19:07.150 { 00:19:07.150 "name": "BaseBdev2", 00:19:07.150 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:07.150 "is_configured": true, 00:19:07.150 "data_offset": 256, 00:19:07.150 "data_size": 7936 00:19:07.150 } 00:19:07.150 ] 00:19:07.150 }' 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:07.150 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.409 09:37:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.409 "name": "raid_bdev1", 00:19:07.409 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:07.409 "strip_size_kb": 0, 00:19:07.409 "state": "online", 00:19:07.409 "raid_level": "raid1", 00:19:07.409 "superblock": true, 00:19:07.409 "num_base_bdevs": 2, 00:19:07.409 "num_base_bdevs_discovered": 2, 
00:19:07.409 "num_base_bdevs_operational": 2, 00:19:07.409 "base_bdevs_list": [ 00:19:07.409 { 00:19:07.409 "name": "spare", 00:19:07.409 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:07.409 "is_configured": true, 00:19:07.409 "data_offset": 256, 00:19:07.409 "data_size": 7936 00:19:07.409 }, 00:19:07.409 { 00:19:07.409 "name": "BaseBdev2", 00:19:07.409 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:07.409 "is_configured": true, 00:19:07.409 "data_offset": 256, 00:19:07.409 "data_size": 7936 00:19:07.409 } 00:19:07.409 ] 00:19:07.409 }' 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.409 09:37:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.669 [2024-11-15 09:37:56.077577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:07.669 [2024-11-15 09:37:56.077744] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:07.669 [2024-11-15 09:37:56.077905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.669 [2024-11-15 09:37:56.078031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.669 [2024-11-15 09:37:56.078045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.669 09:37:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.669 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.929 [2024-11-15 09:37:56.141414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:07.929 [2024-11-15 09:37:56.141568] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:19:07.929 [2024-11-15 09:37:56.141603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:07.929 [2024-11-15 09:37:56.141614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.929 [2024-11-15 09:37:56.144383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.929 [2024-11-15 09:37:56.144424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:07.929 [2024-11-15 09:37:56.144500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:07.929 [2024-11-15 09:37:56.144586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:07.929 [2024-11-15 09:37:56.144734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.929 spare 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.929 [2024-11-15 09:37:56.244752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:07.929 [2024-11-15 09:37:56.244805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:07.929 [2024-11-15 09:37:56.245000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:07.929 [2024-11-15 09:37:56.245139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:07.929 [2024-11-15 09:37:56.245157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:07.929 [2024-11-15 09:37:56.245304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.929 09:37:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.929 "name": "raid_bdev1", 00:19:07.929 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:07.929 "strip_size_kb": 0, 00:19:07.929 "state": "online", 00:19:07.929 "raid_level": "raid1", 00:19:07.929 "superblock": true, 00:19:07.929 "num_base_bdevs": 2, 00:19:07.929 "num_base_bdevs_discovered": 2, 00:19:07.929 "num_base_bdevs_operational": 2, 00:19:07.929 "base_bdevs_list": [ 00:19:07.929 { 00:19:07.929 "name": "spare", 00:19:07.929 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:07.929 "is_configured": true, 00:19:07.929 "data_offset": 256, 00:19:07.929 "data_size": 7936 00:19:07.929 }, 00:19:07.929 { 00:19:07.929 "name": "BaseBdev2", 00:19:07.929 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:07.929 "is_configured": true, 00:19:07.929 "data_offset": 256, 00:19:07.929 "data_size": 7936 00:19:07.929 } 00:19:07.929 ] 00:19:07.929 }' 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.929 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.189 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.448 "name": "raid_bdev1", 00:19:08.448 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:08.448 "strip_size_kb": 0, 00:19:08.448 "state": "online", 00:19:08.448 "raid_level": "raid1", 00:19:08.448 "superblock": true, 00:19:08.448 "num_base_bdevs": 2, 00:19:08.448 "num_base_bdevs_discovered": 2, 00:19:08.448 "num_base_bdevs_operational": 2, 00:19:08.448 "base_bdevs_list": [ 00:19:08.448 { 00:19:08.448 "name": "spare", 00:19:08.448 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:08.448 "is_configured": true, 00:19:08.448 "data_offset": 256, 00:19:08.448 "data_size": 7936 00:19:08.448 }, 00:19:08.448 { 00:19:08.448 "name": "BaseBdev2", 00:19:08.448 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:08.448 "is_configured": true, 00:19:08.448 "data_offset": 256, 00:19:08.448 "data_size": 7936 00:19:08.448 } 00:19:08.448 ] 00:19:08.448 }' 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.448 [2024-11-15 09:37:56.800496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.448 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.449 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.449 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.449 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.449 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.449 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.449 "name": "raid_bdev1", 00:19:08.449 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:08.449 "strip_size_kb": 0, 00:19:08.449 "state": "online", 00:19:08.449 "raid_level": "raid1", 00:19:08.449 "superblock": true, 00:19:08.449 "num_base_bdevs": 2, 00:19:08.449 "num_base_bdevs_discovered": 1, 00:19:08.449 "num_base_bdevs_operational": 1, 00:19:08.449 "base_bdevs_list": [ 00:19:08.449 { 00:19:08.449 "name": null, 00:19:08.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.449 
"is_configured": false, 00:19:08.449 "data_offset": 0, 00:19:08.449 "data_size": 7936 00:19:08.449 }, 00:19:08.449 { 00:19:08.449 "name": "BaseBdev2", 00:19:08.449 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:08.449 "is_configured": true, 00:19:08.449 "data_offset": 256, 00:19:08.449 "data_size": 7936 00:19:08.449 } 00:19:08.449 ] 00:19:08.449 }' 00:19:08.449 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.449 09:37:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.708 09:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.708 09:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.708 09:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:08.968 [2024-11-15 09:37:57.176014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.968 [2024-11-15 09:37:57.176391] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:08.968 [2024-11-15 09:37:57.176471] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
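The "Re-adding bdev spare" notice above is driven by the superblock sequence-number comparison during examine: the spare's on-disk superblock (seq 4) is older than the live raid bdev's (seq 5), so the member is rebuilt back into the array rather than trusted as-is. A minimal sketch of that comparison — values mirror the log output, and the branch logic is paraphrased from the log messages, not taken from SPDK source:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the superblock seq_number check behind the
# "Re-adding bdev spare" notice. Values mirror the log; the decision
# logic is paraphrased from the messages, not from SPDK's bdev_raid.c.
sb_seq_spare=4   # seq_number read from spare's on-disk raid superblock
sb_seq_raid=5    # current seq_number of the assembled raid_bdev1

if (( sb_seq_spare < sb_seq_raid )); then
  action="re-add"   # stale member: rebuild it back into the array
else
  action="trust"    # superblock is current: bring member online as-is
fi
echo "spare superblock seq $sb_seq_spare vs raid seq $sb_seq_raid -> $action"
```

Because 4 < 5 here, the sketch takes the "re-add" branch, matching the `raid_bdev_examine_sb` notice in the log.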
00:19:08.968 [2024-11-15 09:37:57.176551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.968 [2024-11-15 09:37:57.194755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:08.968 09:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.968 09:37:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:08.968 [2024-11-15 09:37:57.196923] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:09.907 "name": "raid_bdev1", 00:19:09.907 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:09.907 "strip_size_kb": 0, 00:19:09.907 "state": "online", 00:19:09.907 "raid_level": "raid1", 00:19:09.907 "superblock": true, 00:19:09.907 "num_base_bdevs": 2, 00:19:09.907 "num_base_bdevs_discovered": 2, 00:19:09.907 "num_base_bdevs_operational": 2, 00:19:09.907 "process": { 00:19:09.907 "type": "rebuild", 00:19:09.907 "target": "spare", 00:19:09.907 "progress": { 00:19:09.907 "blocks": 2560, 00:19:09.907 "percent": 32 00:19:09.907 } 00:19:09.907 }, 00:19:09.907 "base_bdevs_list": [ 00:19:09.907 { 00:19:09.907 "name": "spare", 00:19:09.907 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:09.907 "is_configured": true, 00:19:09.907 "data_offset": 256, 00:19:09.907 "data_size": 7936 00:19:09.907 }, 00:19:09.907 { 00:19:09.907 "name": "BaseBdev2", 00:19:09.907 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:09.907 "is_configured": true, 00:19:09.907 "data_offset": 256, 00:19:09.907 "data_size": 7936 00:19:09.907 } 00:19:09.907 ] 00:19:09.907 }' 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.907 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:09.907 [2024-11-15 09:37:58.360658] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.166 [2024-11-15 09:37:58.407468] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:10.166 [2024-11-15 09:37:58.407548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.166 [2024-11-15 09:37:58.407567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:10.166 [2024-11-15 09:37:58.407578] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.166 09:37:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.166 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.167 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.167 "name": "raid_bdev1", 00:19:10.167 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:10.167 "strip_size_kb": 0, 00:19:10.167 "state": "online", 00:19:10.167 "raid_level": "raid1", 00:19:10.167 "superblock": true, 00:19:10.167 "num_base_bdevs": 2, 00:19:10.167 "num_base_bdevs_discovered": 1, 00:19:10.167 "num_base_bdevs_operational": 1, 00:19:10.167 "base_bdevs_list": [ 00:19:10.167 { 00:19:10.167 "name": null, 00:19:10.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.167 "is_configured": false, 00:19:10.167 "data_offset": 0, 00:19:10.167 "data_size": 7936 00:19:10.167 }, 00:19:10.167 { 00:19:10.167 "name": "BaseBdev2", 00:19:10.167 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:10.167 "is_configured": true, 00:19:10.167 "data_offset": 256, 00:19:10.167 "data_size": 7936 00:19:10.167 } 00:19:10.167 ] 00:19:10.167 }' 00:19:10.167 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.167 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.427 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:10.428 09:37:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.428 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.428 [2024-11-15 09:37:58.875852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:10.428 [2024-11-15 09:37:58.876056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.428 [2024-11-15 09:37:58.876124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:10.428 [2024-11-15 09:37:58.876171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.428 [2024-11-15 09:37:58.876459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.428 [2024-11-15 09:37:58.876523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:10.428 [2024-11-15 09:37:58.876629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:10.428 [2024-11-15 09:37:58.876692] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:10.428 [2024-11-15 09:37:58.876754] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:10.428 [2024-11-15 09:37:58.876825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.696 [2024-11-15 09:37:58.898019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:10.696 spare 00:19:10.696 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.696 09:37:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:10.696 [2024-11-15 09:37:58.900617] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.661 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:11.661 "name": "raid_bdev1", 00:19:11.661 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:11.661 "strip_size_kb": 0, 00:19:11.661 "state": "online", 00:19:11.661 "raid_level": "raid1", 00:19:11.661 "superblock": true, 00:19:11.661 "num_base_bdevs": 2, 00:19:11.661 "num_base_bdevs_discovered": 2, 00:19:11.661 "num_base_bdevs_operational": 2, 00:19:11.661 "process": { 00:19:11.661 "type": "rebuild", 00:19:11.661 "target": "spare", 00:19:11.661 "progress": { 00:19:11.661 "blocks": 2560, 00:19:11.661 "percent": 32 00:19:11.661 } 00:19:11.661 }, 00:19:11.661 "base_bdevs_list": [ 00:19:11.661 { 00:19:11.661 "name": "spare", 00:19:11.661 "uuid": "6a29957f-bb45-5a2c-b232-03ab6c63e861", 00:19:11.661 "is_configured": true, 00:19:11.661 "data_offset": 256, 00:19:11.661 "data_size": 7936 00:19:11.661 }, 00:19:11.661 { 00:19:11.662 "name": "BaseBdev2", 00:19:11.662 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:11.662 "is_configured": true, 00:19:11.662 "data_offset": 256, 00:19:11.662 "data_size": 7936 00:19:11.662 } 00:19:11.662 ] 00:19:11.662 }' 00:19:11.662 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.662 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.662 09:37:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.662 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.662 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:11.662 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.662 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.662 [2024-11-15 
09:38:00.020979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.662 [2024-11-15 09:38:00.110617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:11.662 [2024-11-15 09:38:00.110778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.662 [2024-11-15 09:38:00.110801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.662 [2024-11-15 09:38:00.110810] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.920 09:38:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.920 "name": "raid_bdev1", 00:19:11.920 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:11.920 "strip_size_kb": 0, 00:19:11.920 "state": "online", 00:19:11.920 "raid_level": "raid1", 00:19:11.920 "superblock": true, 00:19:11.920 "num_base_bdevs": 2, 00:19:11.920 "num_base_bdevs_discovered": 1, 00:19:11.920 "num_base_bdevs_operational": 1, 00:19:11.920 "base_bdevs_list": [ 00:19:11.920 { 00:19:11.920 "name": null, 00:19:11.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.920 "is_configured": false, 00:19:11.920 "data_offset": 0, 00:19:11.920 "data_size": 7936 00:19:11.920 }, 00:19:11.920 { 00:19:11.920 "name": "BaseBdev2", 00:19:11.920 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:11.920 "is_configured": true, 00:19:11.920 "data_offset": 256, 00:19:11.920 "data_size": 7936 00:19:11.920 } 00:19:11.920 ] 00:19:11.920 }' 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.920 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:12.180 09:38:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:12.180 "name": "raid_bdev1", 00:19:12.180 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:12.180 "strip_size_kb": 0, 00:19:12.180 "state": "online", 00:19:12.180 "raid_level": "raid1", 00:19:12.180 "superblock": true, 00:19:12.180 "num_base_bdevs": 2, 00:19:12.180 "num_base_bdevs_discovered": 1, 00:19:12.180 "num_base_bdevs_operational": 1, 00:19:12.180 "base_bdevs_list": [ 00:19:12.180 { 00:19:12.180 "name": null, 00:19:12.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.180 "is_configured": false, 00:19:12.180 "data_offset": 0, 00:19:12.180 "data_size": 7936 00:19:12.180 }, 00:19:12.180 { 00:19:12.180 "name": "BaseBdev2", 00:19:12.180 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:12.180 "is_configured": true, 00:19:12.180 "data_offset": 256, 
00:19:12.180 "data_size": 7936 00:19:12.180 } 00:19:12.180 ] 00:19:12.180 }' 00:19:12.180 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.439 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.439 [2024-11-15 09:38:00.730873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:12.439 [2024-11-15 09:38:00.730964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:12.439 [2024-11-15 09:38:00.730992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:12.439 [2024-11-15 09:38:00.731002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:12.439 [2024-11-15 09:38:00.731219] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:12.439 [2024-11-15 09:38:00.731230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:12.440 [2024-11-15 09:38:00.731298] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:12.440 [2024-11-15 09:38:00.731313] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:12.440 [2024-11-15 09:38:00.731324] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:12.440 [2024-11-15 09:38:00.731336] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:12.440 BaseBdev1 00:19:12.440 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.440 09:38:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:13.377 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:13.377 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:13.377 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.378 09:38:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.378 "name": "raid_bdev1", 00:19:13.378 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:13.378 "strip_size_kb": 0, 00:19:13.378 "state": "online", 00:19:13.378 "raid_level": "raid1", 00:19:13.378 "superblock": true, 00:19:13.378 "num_base_bdevs": 2, 00:19:13.378 "num_base_bdevs_discovered": 1, 00:19:13.378 "num_base_bdevs_operational": 1, 00:19:13.378 "base_bdevs_list": [ 00:19:13.378 { 00:19:13.378 "name": null, 00:19:13.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.378 "is_configured": false, 00:19:13.378 "data_offset": 0, 00:19:13.378 "data_size": 7936 00:19:13.378 }, 00:19:13.378 { 00:19:13.378 "name": "BaseBdev2", 00:19:13.378 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:13.378 "is_configured": true, 00:19:13.378 "data_offset": 256, 00:19:13.378 "data_size": 7936 00:19:13.378 } 00:19:13.378 ] 00:19:13.378 }' 00:19:13.378 09:38:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.378 09:38:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.946 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.946 "name": "raid_bdev1", 00:19:13.946 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:13.946 "strip_size_kb": 0, 00:19:13.946 "state": "online", 00:19:13.946 "raid_level": "raid1", 00:19:13.946 "superblock": true, 00:19:13.946 "num_base_bdevs": 2, 00:19:13.946 "num_base_bdevs_discovered": 1, 00:19:13.946 "num_base_bdevs_operational": 1, 00:19:13.946 "base_bdevs_list": [ 00:19:13.947 { 00:19:13.947 "name": 
null, 00:19:13.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.947 "is_configured": false, 00:19:13.947 "data_offset": 0, 00:19:13.947 "data_size": 7936 00:19:13.947 }, 00:19:13.947 { 00:19:13.947 "name": "BaseBdev2", 00:19:13.947 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:13.947 "is_configured": true, 00:19:13.947 "data_offset": 256, 00:19:13.947 "data_size": 7936 00:19:13.947 } 00:19:13.947 ] 00:19:13.947 }' 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.947 [2024-11-15 09:38:02.364109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:13.947 [2024-11-15 09:38:02.364311] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:13.947 [2024-11-15 09:38:02.364330] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:13.947 request: 00:19:13.947 { 00:19:13.947 "base_bdev": "BaseBdev1", 00:19:13.947 "raid_bdev": "raid_bdev1", 00:19:13.947 "method": "bdev_raid_add_base_bdev", 00:19:13.947 "req_id": 1 00:19:13.947 } 00:19:13.947 Got JSON-RPC error response 00:19:13.947 response: 00:19:13.947 { 00:19:13.947 "code": -22, 00:19:13.947 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:13.947 } 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.947 09:38:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.328 "name": "raid_bdev1", 00:19:15.328 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:15.328 "strip_size_kb": 0, 
00:19:15.328 "state": "online", 00:19:15.328 "raid_level": "raid1", 00:19:15.328 "superblock": true, 00:19:15.328 "num_base_bdevs": 2, 00:19:15.328 "num_base_bdevs_discovered": 1, 00:19:15.328 "num_base_bdevs_operational": 1, 00:19:15.328 "base_bdevs_list": [ 00:19:15.328 { 00:19:15.328 "name": null, 00:19:15.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.328 "is_configured": false, 00:19:15.328 "data_offset": 0, 00:19:15.328 "data_size": 7936 00:19:15.328 }, 00:19:15.328 { 00:19:15.328 "name": "BaseBdev2", 00:19:15.328 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:15.328 "is_configured": true, 00:19:15.328 "data_offset": 256, 00:19:15.328 "data_size": 7936 00:19:15.328 } 00:19:15.328 ] 00:19:15.328 }' 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.328 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.587 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.588 
09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.588 "name": "raid_bdev1", 00:19:15.588 "uuid": "bb6ac635-09f5-4bc1-8e47-e754be83e8ba", 00:19:15.588 "strip_size_kb": 0, 00:19:15.588 "state": "online", 00:19:15.588 "raid_level": "raid1", 00:19:15.588 "superblock": true, 00:19:15.588 "num_base_bdevs": 2, 00:19:15.588 "num_base_bdevs_discovered": 1, 00:19:15.588 "num_base_bdevs_operational": 1, 00:19:15.588 "base_bdevs_list": [ 00:19:15.588 { 00:19:15.588 "name": null, 00:19:15.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.588 "is_configured": false, 00:19:15.588 "data_offset": 0, 00:19:15.588 "data_size": 7936 00:19:15.588 }, 00:19:15.588 { 00:19:15.588 "name": "BaseBdev2", 00:19:15.588 "uuid": "33d0fe10-9709-59b6-973d-2f936c546254", 00:19:15.588 "is_configured": true, 00:19:15.588 "data_offset": 256, 00:19:15.588 "data_size": 7936 00:19:15.588 } 00:19:15.588 ] 00:19:15.588 }' 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89482 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89482 ']' 00:19:15.588 09:38:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89482 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89482 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:15.588 killing process with pid 89482 00:19:15.588 Received shutdown signal, test time was about 60.000000 seconds 00:19:15.588 00:19:15.588 Latency(us) 00:19:15.588 [2024-11-15T09:38:04.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.588 [2024-11-15T09:38:04.051Z] =================================================================================================================== 00:19:15.588 [2024-11-15T09:38:04.051Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89482' 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89482 00:19:15.588 [2024-11-15 09:38:03.990394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.588 09:38:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89482 00:19:15.588 [2024-11-15 09:38:03.990549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.588 [2024-11-15 09:38:03.990604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:19:15.588 [2024-11-15 09:38:03.990616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:16.157 [2024-11-15 09:38:04.327891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.103 09:38:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:17.103 00:19:17.103 real 0m17.667s 00:19:17.103 user 0m22.788s 00:19:17.103 sys 0m1.775s 00:19:17.103 09:38:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:17.103 09:38:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.103 ************************************ 00:19:17.103 END TEST raid_rebuild_test_sb_md_interleaved 00:19:17.103 ************************************ 00:19:17.363 09:38:05 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:17.363 09:38:05 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:17.363 09:38:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89482 ']' 00:19:17.363 09:38:05 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89482 00:19:17.363 09:38:05 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:17.363 00:19:17.363 real 12m28.716s 00:19:17.363 user 16m43.442s 00:19:17.363 sys 2m3.349s 00:19:17.363 09:38:05 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:17.363 ************************************ 00:19:17.363 END TEST bdev_raid 00:19:17.363 ************************************ 00:19:17.363 09:38:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.363 09:38:05 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:17.363 09:38:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:17.363 09:38:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:17.363 09:38:05 -- common/autotest_common.sh@10 -- # set +x 00:19:17.363 
************************************ 00:19:17.363 START TEST spdkcli_raid 00:19:17.363 ************************************ 00:19:17.363 09:38:05 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:17.363 * Looking for test storage... 00:19:17.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.624 09:38:05 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.624 --rc genhtml_branch_coverage=1 00:19:17.624 --rc genhtml_function_coverage=1 00:19:17.624 --rc genhtml_legend=1 00:19:17.624 --rc geninfo_all_blocks=1 00:19:17.624 --rc geninfo_unexecuted_blocks=1 00:19:17.624 00:19:17.624 ' 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.624 --rc genhtml_branch_coverage=1 00:19:17.624 --rc genhtml_function_coverage=1 00:19:17.624 --rc genhtml_legend=1 00:19:17.624 --rc geninfo_all_blocks=1 00:19:17.624 --rc geninfo_unexecuted_blocks=1 00:19:17.624 00:19:17.624 ' 00:19:17.624 
09:38:05 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.624 --rc genhtml_branch_coverage=1 00:19:17.624 --rc genhtml_function_coverage=1 00:19:17.624 --rc genhtml_legend=1 00:19:17.624 --rc geninfo_all_blocks=1 00:19:17.624 --rc geninfo_unexecuted_blocks=1 00:19:17.624 00:19:17.624 ' 00:19:17.624 09:38:05 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:17.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.624 --rc genhtml_branch_coverage=1 00:19:17.624 --rc genhtml_function_coverage=1 00:19:17.624 --rc genhtml_legend=1 00:19:17.624 --rc geninfo_all_blocks=1 00:19:17.624 --rc geninfo_unexecuted_blocks=1 00:19:17.624 00:19:17.624 ' 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:17.624 09:38:05 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:17.624 09:38:05 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90164 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:17.625 09:38:05 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90164 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90164 ']' 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:17.625 09:38:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.625 [2024-11-15 09:38:06.065824] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:19:17.625 [2024-11-15 09:38:06.066427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90164 ] 00:19:17.885 [2024-11-15 09:38:06.242242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:18.144 [2024-11-15 09:38:06.391842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.144 [2024-11-15 09:38:06.391911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.082 09:38:07 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.082 09:38:07 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:19:19.082 09:38:07 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:19.082 09:38:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.082 09:38:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.082 09:38:07 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:19.082 09:38:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:19.082 09:38:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:19.082 09:38:07 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:19.082 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:19.082 ' 00:19:20.990 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:20.990 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:20.990 09:38:09 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:20.990 09:38:09 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:20.990 09:38:09 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.990 09:38:09 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:20.990 09:38:09 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:20.990 09:38:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:20.990 09:38:09 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:20.990 ' 00:19:21.928 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:21.928 09:38:10 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:21.928 09:38:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:21.928 09:38:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.928 09:38:10 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:21.928 09:38:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:21.928 09:38:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.928 09:38:10 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:21.928 09:38:10 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:22.498 09:38:10 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:22.498 09:38:10 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:22.498 09:38:10 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:22.498 09:38:10 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.498 09:38:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.758 09:38:10 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:22.758 09:38:10 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:22.758 09:38:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.758 09:38:10 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:22.758 ' 00:19:23.698 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:23.698 09:38:12 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:23.698 09:38:12 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.698 09:38:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.698 09:38:12 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:23.698 09:38:12 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:23.698 09:38:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.698 09:38:12 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:23.698 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:23.698 ' 00:19:25.078 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:25.078 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:25.338 09:38:13 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:25.338 09:38:13 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90164 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90164 ']' 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90164 00:19:25.338 09:38:13 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90164 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90164' 00:19:25.338 killing process with pid 90164 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90164 00:19:25.338 09:38:13 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90164 00:19:27.880 09:38:16 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:27.880 09:38:16 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90164 ']' 00:19:27.880 09:38:16 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90164 00:19:27.880 09:38:16 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90164 ']' 00:19:27.880 09:38:16 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90164 00:19:27.880 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90164) - No such process 00:19:27.880 09:38:16 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90164 is not found' 00:19:27.880 Process with pid 90164 is not found 00:19:27.880 09:38:16 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:27.880 09:38:16 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:27.880 09:38:16 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:27.880 09:38:16 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:27.880 00:19:27.880 real 0m10.563s 00:19:27.880 user 0m21.547s 00:19:27.880 sys 
0m1.356s 00:19:27.880 09:38:16 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:27.880 ************************************ 00:19:27.880 END TEST spdkcli_raid 00:19:27.880 ************************************ 00:19:27.880 09:38:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.880 09:38:16 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:27.880 09:38:16 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:27.880 09:38:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:27.880 09:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:28.140 ************************************ 00:19:28.140 START TEST blockdev_raid5f 00:19:28.140 ************************************ 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:28.140 * Looking for test storage... 00:19:28.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.140 09:38:16 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:28.140 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.140 --rc genhtml_branch_coverage=1 00:19:28.140 --rc genhtml_function_coverage=1 00:19:28.140 --rc genhtml_legend=1 00:19:28.140 --rc geninfo_all_blocks=1 00:19:28.140 --rc geninfo_unexecuted_blocks=1 00:19:28.140 00:19:28.140 ' 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:28.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.140 --rc genhtml_branch_coverage=1 00:19:28.140 --rc genhtml_function_coverage=1 00:19:28.140 --rc genhtml_legend=1 00:19:28.140 --rc geninfo_all_blocks=1 00:19:28.140 --rc geninfo_unexecuted_blocks=1 00:19:28.140 00:19:28.140 ' 00:19:28.140 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.141 --rc genhtml_branch_coverage=1 00:19:28.141 --rc genhtml_function_coverage=1 00:19:28.141 --rc genhtml_legend=1 00:19:28.141 --rc geninfo_all_blocks=1 00:19:28.141 --rc geninfo_unexecuted_blocks=1 00:19:28.141 00:19:28.141 ' 00:19:28.141 09:38:16 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:28.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.141 --rc genhtml_branch_coverage=1 00:19:28.141 --rc genhtml_function_coverage=1 00:19:28.141 --rc genhtml_legend=1 00:19:28.141 --rc geninfo_all_blocks=1 00:19:28.141 --rc geninfo_unexecuted_blocks=1 00:19:28.141 00:19:28.141 ' 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90446 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:28.141 09:38:16 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90446 00:19:28.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.141 09:38:16 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90446 ']' 00:19:28.141 09:38:16 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.141 09:38:16 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:28.141 09:38:16 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.141 09:38:16 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:28.141 09:38:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:28.401 [2024-11-15 09:38:16.698113] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:28.401 [2024-11-15 09:38:16.698242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90446 ] 00:19:28.661 [2024-11-15 09:38:16.869474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.661 [2024-11-15 09:38:17.003023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.600 09:38:17 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:29.600 09:38:17 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:19:29.600 09:38:17 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:29.600 09:38:17 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:29.600 09:38:17 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:29.600 09:38:17 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.600 09:38:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.600 Malloc0 00:19:29.860 Malloc1 00:19:29.860 Malloc2 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:29.860 09:38:18 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:29.860 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:29.861 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "50240bc1-98a9-4856-be69-c57a8cf5096e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "50240bc1-98a9-4856-be69-c57a8cf5096e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "50240bc1-98a9-4856-be69-c57a8cf5096e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1628e121-c62d-44e9-83d5-4f71b498b5db",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"efb198c1-0e94-43a4-bc0c-d4a441d77e1b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7fb033fc-96b2-4b48-80aa-f32aa10356cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:30.120 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:30.120 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:30.120 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:30.120 09:38:18 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90446 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90446 ']' 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90446 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90446 00:19:30.120 killing process with pid 90446 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:30.120 09:38:18 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90446' 00:19:30.121 09:38:18 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90446 00:19:30.121 09:38:18 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90446 00:19:33.449 09:38:21 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:33.449 09:38:21 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:33.449 09:38:21 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:33.449 09:38:21 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:33.449 09:38:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:33.449 ************************************ 00:19:33.449 START TEST bdev_hello_world 00:19:33.449 ************************************ 00:19:33.449 09:38:21 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:33.449 [2024-11-15 09:38:21.311942] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:33.449 [2024-11-15 09:38:21.312053] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90513 ] 00:19:33.449 [2024-11-15 09:38:21.488836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.449 [2024-11-15 09:38:21.624119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.019 [2024-11-15 09:38:22.236995] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:34.019 [2024-11-15 09:38:22.237063] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:34.019 [2024-11-15 09:38:22.237096] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:34.019 [2024-11-15 09:38:22.237686] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:34.019 [2024-11-15 09:38:22.237850] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:34.019 [2024-11-15 09:38:22.237884] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:34.019 [2024-11-15 09:38:22.237941] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:34.019 00:19:34.019 [2024-11-15 09:38:22.237973] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:35.401 ************************************ 00:19:35.401 END TEST bdev_hello_world 00:19:35.401 ************************************ 00:19:35.401 00:19:35.401 real 0m2.542s 00:19:35.401 user 0m2.044s 00:19:35.401 sys 0m0.358s 00:19:35.401 09:38:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:35.401 09:38:23 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:35.401 09:38:23 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:35.401 09:38:23 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:35.401 09:38:23 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:35.401 09:38:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:35.401 ************************************ 00:19:35.401 START TEST bdev_bounds 00:19:35.401 ************************************ 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:19:35.401 Process bdevio pid: 90565 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90565 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90565' 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90565 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90565 ']' 00:19:35.401 09:38:23 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:35.401 09:38:23 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:35.660 [2024-11-15 09:38:23.919662] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:19:35.660 [2024-11-15 09:38:23.919874] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90565 ] 00:19:35.660 [2024-11-15 09:38:24.094344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:35.920 [2024-11-15 09:38:24.237019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.920 [2024-11-15 09:38:24.237091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.920 [2024-11-15 09:38:24.237134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.488 09:38:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:36.488 09:38:24 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:19:36.488 09:38:24 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:36.488 I/O targets: 00:19:36.488 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:36.488 00:19:36.488 
00:19:36.488 CUnit - A unit testing framework for C - Version 2.1-3 00:19:36.488 http://cunit.sourceforge.net/ 00:19:36.488 00:19:36.488 00:19:36.488 Suite: bdevio tests on: raid5f 00:19:36.488 Test: blockdev write read block ...passed 00:19:36.488 Test: blockdev write zeroes read block ...passed 00:19:36.747 Test: blockdev write zeroes read no split ...passed 00:19:36.747 Test: blockdev write zeroes read split ...passed 00:19:36.747 Test: blockdev write zeroes read split partial ...passed 00:19:36.747 Test: blockdev reset ...passed 00:19:36.747 Test: blockdev write read 8 blocks ...passed 00:19:36.747 Test: blockdev write read size > 128k ...passed 00:19:36.747 Test: blockdev write read invalid size ...passed 00:19:36.747 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:36.747 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:36.747 Test: blockdev write read max offset ...passed 00:19:36.747 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:36.747 Test: blockdev writev readv 8 blocks ...passed 00:19:37.008 Test: blockdev writev readv 30 x 1block ...passed 00:19:37.008 Test: blockdev writev readv block ...passed 00:19:37.008 Test: blockdev writev readv size > 128k ...passed 00:19:37.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:37.008 Test: blockdev comparev and writev ...passed 00:19:37.008 Test: blockdev nvme passthru rw ...passed 00:19:37.008 Test: blockdev nvme passthru vendor specific ...passed 00:19:37.008 Test: blockdev nvme admin passthru ...passed 00:19:37.008 Test: blockdev copy ...passed 00:19:37.008 00:19:37.008 Run Summary: Type Total Ran Passed Failed Inactive 00:19:37.008 suites 1 1 n/a 0 0 00:19:37.008 tests 23 23 23 0 0 00:19:37.008 asserts 130 130 130 0 n/a 00:19:37.008 00:19:37.008 Elapsed time = 0.672 seconds 00:19:37.008 0 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90565 00:19:37.008 
09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90565 ']' 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90565 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90565 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90565' 00:19:37.008 killing process with pid 90565 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90565 00:19:37.008 09:38:25 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90565 00:19:38.388 09:38:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:38.388 00:19:38.388 real 0m2.980s 00:19:38.388 user 0m7.327s 00:19:38.388 sys 0m0.474s 00:19:38.388 09:38:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:38.388 09:38:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:38.388 ************************************ 00:19:38.388 END TEST bdev_bounds 00:19:38.388 ************************************ 00:19:38.649 09:38:26 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:38.649 09:38:26 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:38.649 09:38:26 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:38.649 
09:38:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:38.649 ************************************ 00:19:38.649 START TEST bdev_nbd 00:19:38.649 ************************************ 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90633 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90633 /var/tmp/spdk-nbd.sock 00:19:38.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90633 ']' 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.649 09:38:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:38.649 [2024-11-15 09:38:26.993023] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:19:38.649 [2024-11-15 09:38:26.993151] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.909 [2024-11-15 09:38:27.173234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.909 [2024-11-15 09:38:27.318661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.848 09:38:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:39.848 09:38:27 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:19:39.848 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:39.848 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:39.848 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:39.848 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:39.849 09:38:27 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.849 1+0 records in 00:19:39.849 1+0 records out 00:19:39.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434123 s, 9.4 MB/s 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:39.849 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:40.108 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:40.109 { 00:19:40.109 "nbd_device": "/dev/nbd0", 00:19:40.109 "bdev_name": "raid5f" 00:19:40.109 } 00:19:40.109 ]' 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:40.109 { 00:19:40.109 "nbd_device": "/dev/nbd0", 00:19:40.109 "bdev_name": "raid5f" 00:19:40.109 } 00:19:40.109 ]' 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.109 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:40.369 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.629 09:38:28 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:40.889 /dev/nbd0 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:40.889 09:38:29 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.889 1+0 records in 00:19:40.889 1+0 records out 00:19:40.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393853 s, 10.4 MB/s 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:40.889 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:41.148 { 00:19:41.148 "nbd_device": "/dev/nbd0", 00:19:41.148 "bdev_name": "raid5f" 00:19:41.148 } 00:19:41.148 ]' 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:41.148 { 00:19:41.148 "nbd_device": "/dev/nbd0", 00:19:41.148 "bdev_name": "raid5f" 00:19:41.148 } 00:19:41.148 ]' 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:41.148 256+0 records in 00:19:41.148 256+0 records out 00:19:41.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143288 s, 73.2 MB/s 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:41.148 256+0 records in 00:19:41.148 256+0 records out 00:19:41.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308281 s, 34.0 MB/s 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:41.148 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.149 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:41.408 09:38:29 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:41.668 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:41.928 malloc_lvol_verify 00:19:41.928 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:42.187 3241600f-63ba-4e9f-8d29-3229145fe286 00:19:42.187 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:42.474 bb6e3ed3-ca23-4930-97f1-2137ac9cc5b8 00:19:42.474 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:42.474 /dev/nbd0 00:19:42.474 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:42.474 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:42.475 mke2fs 1.47.0 (5-Feb-2023) 00:19:42.475 Discarding device blocks: 0/4096 done 00:19:42.475 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:42.475 00:19:42.475 Allocating group tables: 0/1 done 00:19:42.475 Writing inode tables: 0/1 done 00:19:42.475 Creating journal (1024 blocks): done 00:19:42.475 Writing superblocks and filesystem accounting information: 0/1 done 00:19:42.475 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:42.475 09:38:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90633 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90633 ']' 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90633 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90633 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90633' 00:19:42.735 killing process with pid 90633 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90633 00:19:42.735 09:38:31 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90633 00:19:44.645 09:38:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:44.645 00:19:44.645 real 0m5.900s 00:19:44.645 user 0m7.757s 00:19:44.645 sys 0m1.410s 00:19:44.645 09:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:44.645 09:38:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:44.645 ************************************ 00:19:44.645 END TEST bdev_nbd 00:19:44.645 ************************************ 00:19:44.645 09:38:32 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:44.645 09:38:32 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:44.645 09:38:32 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:44.645 09:38:32 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:44.645 09:38:32 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:44.645 09:38:32 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:44.645 09:38:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.645 ************************************ 00:19:44.645 START TEST bdev_fio 00:19:44.645 ************************************ 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:44.645 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:44.645 09:38:32 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:44.645 ************************************ 00:19:44.645 START TEST bdev_fio_rw_verify 00:19:44.645 ************************************ 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:44.645 09:38:33 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:44.905 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:44.905 fio-3.35 00:19:44.905 Starting 1 thread 00:19:57.139 00:19:57.139 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90836: Fri Nov 15 09:38:44 2024 00:19:57.139 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10001msec) 00:19:57.139 slat (nsec): min=16974, max=53780, avg=20318.43, stdev=2586.06 00:19:57.139 clat (usec): min=10, max=317, avg=138.58, stdev=48.86 00:19:57.139 lat (usec): min=30, max=348, avg=158.90, stdev=49.34 00:19:57.139 clat percentiles (usec): 00:19:57.139 | 50.000th=[ 141], 99.000th=[ 243], 99.900th=[ 273], 99.990th=[ 306], 00:19:57.139 | 99.999th=[ 314] 00:19:57.139 write: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(468MiB/9862msec); 0 zone resets 00:19:57.139 slat (usec): min=7, max=983, avg=17.47, stdev= 4.78 00:19:57.139 clat (usec): min=61, max=1390, avg=319.42, stdev=46.64 00:19:57.139 lat (usec): min=76, max=1408, avg=336.89, stdev=47.89 00:19:57.139 clat percentiles (usec): 00:19:57.139 | 50.000th=[ 322], 99.000th=[ 433], 99.900th=[ 545], 99.990th=[ 979], 00:19:57.139 | 99.999th=[ 1369] 00:19:57.139 bw ( KiB/s): min=45928, max=50600, per=98.88%, avg=48022.32, stdev=1509.32, samples=19 00:19:57.139 iops : min=11482, max=12650, avg=12005.58, stdev=377.33, samples=19 00:19:57.140 lat (usec) : 20=0.01%, 50=0.01%, 
100=12.43%, 250=39.56%, 500=47.94% 00:19:57.140 lat (usec) : 750=0.05%, 1000=0.02% 00:19:57.140 lat (msec) : 2=0.01% 00:19:57.140 cpu : usr=98.95%, sys=0.40%, ctx=18, majf=0, minf=9557 00:19:57.140 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:57.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.140 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:57.140 issued rwts: total=115211,119735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:57.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:57.140 00:19:57.140 Run status group 0 (all jobs): 00:19:57.140 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10001-10001msec 00:19:57.140 WRITE: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=468MiB (490MB), run=9862-9862msec 00:19:57.399 ----------------------------------------------------- 00:19:57.399 Suppressions used: 00:19:57.399 count bytes template 00:19:57.399 1 7 /usr/src/fio/parse.c 00:19:57.399 939 90144 /usr/src/fio/iolog.c 00:19:57.399 1 8 libtcmalloc_minimal.so 00:19:57.399 1 904 libcrypto.so 00:19:57.399 ----------------------------------------------------- 00:19:57.399 00:19:57.399 00:19:57.399 real 0m12.847s 00:19:57.399 user 0m12.976s 00:19:57.399 sys 0m0.733s 00:19:57.399 09:38:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:57.399 09:38:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:57.399 ************************************ 00:19:57.399 END TEST bdev_fio_rw_verify 00:19:57.399 ************************************ 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "50240bc1-98a9-4856-be69-c57a8cf5096e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "50240bc1-98a9-4856-be69-c57a8cf5096e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "50240bc1-98a9-4856-be69-c57a8cf5096e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1628e121-c62d-44e9-83d5-4f71b498b5db",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "efb198c1-0e94-43a4-bc0c-d4a441d77e1b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7fb033fc-96b2-4b48-80aa-f32aa10356cc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:57.659 /home/vagrant/spdk_repo/spdk 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:57.659 09:38:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:57.660 00:19:57.660 real 0m13.125s 00:19:57.660 user 0m13.093s 00:19:57.660 sys 0m0.875s 00:19:57.660 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:57.660 09:38:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:57.660 ************************************ 00:19:57.660 END TEST bdev_fio 00:19:57.660 ************************************ 00:19:57.660 09:38:46 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:57.660 09:38:46 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:57.660 09:38:46 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:19:57.660 09:38:46 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:57.660 09:38:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:57.660 ************************************ 00:19:57.660 START TEST bdev_verify 00:19:57.660 ************************************ 00:19:57.660 09:38:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:57.919 [2024-11-15 09:38:46.151011] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 
00:19:57.919 [2024-11-15 09:38:46.151143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91001 ] 00:19:57.919 [2024-11-15 09:38:46.334481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:58.179 [2024-11-15 09:38:46.447810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.179 [2024-11-15 09:38:46.447839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.749 Running I/O for 5 seconds... 00:20:00.627 9932.00 IOPS, 38.80 MiB/s [2024-11-15T09:38:50.031Z] 10103.50 IOPS, 39.47 MiB/s [2024-11-15T09:38:51.413Z] 11637.67 IOPS, 45.46 MiB/s [2024-11-15T09:38:52.354Z] 12879.00 IOPS, 50.31 MiB/s [2024-11-15T09:38:52.354Z] 12793.80 IOPS, 49.98 MiB/s 00:20:03.891 Latency(us) 00:20:03.891 [2024-11-15T09:38:52.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.891 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:03.891 Verification LBA range: start 0x0 length 0x2000 00:20:03.891 raid5f : 5.02 6129.52 23.94 0.00 0.00 31379.45 1552.54 34113.06 00:20:03.891 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.891 Verification LBA range: start 0x2000 length 0x2000 00:20:03.891 raid5f : 5.02 6626.96 25.89 0.00 0.00 29030.44 203.91 37547.26 00:20:03.891 [2024-11-15T09:38:52.354Z] =================================================================================================================== 00:20:03.891 [2024-11-15T09:38:52.354Z] Total : 12756.49 49.83 0.00 0.00 30159.31 203.91 37547.26 00:20:05.274 00:20:05.274 real 0m7.298s 00:20:05.274 user 0m13.467s 00:20:05.274 sys 0m0.294s 00:20:05.274 09:38:53 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:05.274 09:38:53 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:05.274 ************************************ 00:20:05.274 END TEST bdev_verify 00:20:05.274 ************************************ 00:20:05.274 09:38:53 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:05.274 09:38:53 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:20:05.274 09:38:53 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:05.274 09:38:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:05.274 ************************************ 00:20:05.274 START TEST bdev_verify_big_io 00:20:05.274 ************************************ 00:20:05.274 09:38:53 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:05.274 [2024-11-15 09:38:53.503796] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:05.274 [2024-11-15 09:38:53.503904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91098 ] 00:20:05.274 [2024-11-15 09:38:53.677887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:05.534 [2024-11-15 09:38:53.787406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.534 [2024-11-15 09:38:53.787440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.103 Running I/O for 5 seconds... 
00:20:07.983 756.00 IOPS, 47.25 MiB/s [2024-11-15T09:38:57.851Z] 760.00 IOPS, 47.50 MiB/s [2024-11-15T09:38:58.789Z] 760.33 IOPS, 47.52 MiB/s [2024-11-15T09:38:59.729Z] 761.50 IOPS, 47.59 MiB/s [2024-11-15T09:38:59.729Z] 761.60 IOPS, 47.60 MiB/s 00:20:11.267 Latency(us) 00:20:11.267 [2024-11-15T09:38:59.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.267 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:11.267 Verification LBA range: start 0x0 length 0x200 00:20:11.267 raid5f : 5.31 346.55 21.66 0.00 0.00 9034844.67 227.16 419430.40 00:20:11.267 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:11.267 Verification LBA range: start 0x200 length 0x200 00:20:11.267 raid5f : 5.30 407.18 25.45 0.00 0.00 7840719.48 287.97 349830.60 00:20:11.267 [2024-11-15T09:38:59.730Z] =================================================================================================================== 00:20:11.267 [2024-11-15T09:38:59.730Z] Total : 753.73 47.11 0.00 0.00 8390154.43 227.16 419430.40 00:20:12.648 00:20:12.648 real 0m7.587s 00:20:12.648 user 0m14.103s 00:20:12.648 sys 0m0.252s 00:20:12.648 09:39:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:12.648 09:39:01 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:12.648 ************************************ 00:20:12.648 END TEST bdev_verify_big_io 00:20:12.648 ************************************ 00:20:12.648 09:39:01 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.648 09:39:01 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:12.648 09:39:01 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:12.648 09:39:01 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:12.648 ************************************ 00:20:12.648 START TEST bdev_write_zeroes 00:20:12.648 ************************************ 00:20:12.648 09:39:01 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:12.908 [2024-11-15 09:39:01.157361] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:12.908 [2024-11-15 09:39:01.157457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91192 ] 00:20:12.908 [2024-11-15 09:39:01.331778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.168 [2024-11-15 09:39:01.443120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.737 Running I/O for 1 seconds... 
00:20:14.677 29199.00 IOPS, 114.06 MiB/s 00:20:14.677 Latency(us) 00:20:14.677 [2024-11-15T09:39:03.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.677 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:14.678 raid5f : 1.01 29166.87 113.93 0.00 0.00 4375.33 1359.37 5838.14 00:20:14.678 [2024-11-15T09:39:03.141Z] =================================================================================================================== 00:20:14.678 [2024-11-15T09:39:03.141Z] Total : 29166.87 113.93 0.00 0.00 4375.33 1359.37 5838.14 00:20:16.059 00:20:16.059 real 0m3.214s 00:20:16.059 user 0m2.843s 00:20:16.059 sys 0m0.246s 00:20:16.059 09:39:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:16.059 09:39:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:16.059 ************************************ 00:20:16.059 END TEST bdev_write_zeroes 00:20:16.059 ************************************ 00:20:16.059 09:39:04 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:16.059 09:39:04 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:16.059 09:39:04 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:16.059 09:39:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:16.059 ************************************ 00:20:16.059 START TEST bdev_json_nonenclosed 00:20:16.059 ************************************ 00:20:16.059 09:39:04 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:16.059 [2024-11-15 
09:39:04.449110] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:16.059 [2024-11-15 09:39:04.449220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91245 ] 00:20:16.319 [2024-11-15 09:39:04.629327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.319 [2024-11-15 09:39:04.739529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.319 [2024-11-15 09:39:04.739610] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:16.319 [2024-11-15 09:39:04.739634] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:16.319 [2024-11-15 09:39:04.739644] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:16.578 00:20:16.578 real 0m0.619s 00:20:16.578 user 0m0.397s 00:20:16.578 sys 0m0.117s 00:20:16.578 09:39:04 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:16.578 09:39:04 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:16.578 ************************************ 00:20:16.578 END TEST bdev_json_nonenclosed 00:20:16.578 ************************************ 00:20:16.578 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:16.578 09:39:05 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:20:16.578 09:39:05 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:16.578 09:39:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:16.839 
************************************ 00:20:16.839 START TEST bdev_json_nonarray 00:20:16.839 ************************************ 00:20:16.839 09:39:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:16.839 [2024-11-15 09:39:05.139992] Starting SPDK v25.01-pre git sha1 318515b44 / DPDK 24.03.0 initialization... 00:20:16.839 [2024-11-15 09:39:05.140105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91276 ] 00:20:17.099 [2024-11-15 09:39:05.319144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.099 [2024-11-15 09:39:05.427993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.099 [2024-11-15 09:39:05.428099] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:17.099 [2024-11-15 09:39:05.428116] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:17.099 [2024-11-15 09:39:05.428150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:17.359 00:20:17.359 real 0m0.625s 00:20:17.359 user 0m0.383s 00:20:17.359 sys 0m0.137s 00:20:17.359 09:39:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:17.359 09:39:05 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:17.359 ************************************ 00:20:17.359 END TEST bdev_json_nonarray 00:20:17.359 ************************************ 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:17.359 09:39:05 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:17.359 00:20:17.359 real 0m49.395s 00:20:17.359 user 1m6.143s 00:20:17.359 sys 0m5.462s 00:20:17.359 09:39:05 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:17.359 09:39:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:17.359 
************************************ 00:20:17.359 END TEST blockdev_raid5f 00:20:17.359 ************************************ 00:20:17.359 09:39:05 -- spdk/autotest.sh@194 -- # uname -s 00:20:17.359 09:39:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:17.359 09:39:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:17.359 09:39:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:17.359 09:39:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:17.359 09:39:05 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:17.360 09:39:05 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:17.360 09:39:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.360 09:39:05 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 09:39:05 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:17.620 09:39:05 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:20:17.620 09:39:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:17.620 09:39:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:17.620 09:39:05 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:20:17.620 09:39:05 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:20:17.620 09:39:05 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:20:17.620 09:39:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.620 09:39:05 -- common/autotest_common.sh@10 -- # set +x 00:20:17.620 09:39:05 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:20:17.620 09:39:05 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:20:17.620 09:39:05 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:20:17.620 09:39:05 -- common/autotest_common.sh@10 -- # set +x 00:20:20.185 INFO: APP EXITING 00:20:20.185 INFO: killing all VMs 00:20:20.185 INFO: killing vhost app 00:20:20.185 INFO: EXIT DONE 00:20:20.185 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:20.185 Waiting for block devices as requested 00:20:20.445 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.445 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:21.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:21.385 Cleaning 00:20:21.385 Removing: /var/run/dpdk/spdk0/config 00:20:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:21.385 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:21.385 Removing: /dev/shm/spdk_tgt_trace.pid56975 00:20:21.385 Removing: /var/run/dpdk/spdk0 00:20:21.385 Removing: /var/run/dpdk/spdk_pid56723 00:20:21.385 Removing: /var/run/dpdk/spdk_pid56975 00:20:21.385 Removing: /var/run/dpdk/spdk_pid57215 00:20:21.385 Removing: /var/run/dpdk/spdk_pid57330 00:20:21.385 Removing: /var/run/dpdk/spdk_pid57397 00:20:21.385 Removing: /var/run/dpdk/spdk_pid57536 00:20:21.385 Removing: /var/run/dpdk/spdk_pid57560 
00:20:21.385 Removing: /var/run/dpdk/spdk_pid57775 00:20:21.385 Removing: /var/run/dpdk/spdk_pid57893 00:20:21.385 Removing: /var/run/dpdk/spdk_pid58006 00:20:21.385 Removing: /var/run/dpdk/spdk_pid58139 00:20:21.385 Removing: /var/run/dpdk/spdk_pid58258 00:20:21.385 Removing: /var/run/dpdk/spdk_pid58303 00:20:21.385 Removing: /var/run/dpdk/spdk_pid58335 00:20:21.385 Removing: /var/run/dpdk/spdk_pid58410 00:20:21.385 Removing: /var/run/dpdk/spdk_pid58538 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59004 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59079 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59166 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59183 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59337 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59353 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59517 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59539 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59614 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59633 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59707 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59731 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59950 00:20:21.385 Removing: /var/run/dpdk/spdk_pid59981 00:20:21.385 Removing: /var/run/dpdk/spdk_pid60070 00:20:21.385 Removing: /var/run/dpdk/spdk_pid61463 00:20:21.385 Removing: /var/run/dpdk/spdk_pid61680 00:20:21.385 Removing: /var/run/dpdk/spdk_pid61820 00:20:21.385 Removing: /var/run/dpdk/spdk_pid62480 00:20:21.385 Removing: /var/run/dpdk/spdk_pid62697 00:20:21.644 Removing: /var/run/dpdk/spdk_pid62837 00:20:21.644 Removing: /var/run/dpdk/spdk_pid63491 00:20:21.644 Removing: /var/run/dpdk/spdk_pid63823 00:20:21.644 Removing: /var/run/dpdk/spdk_pid63968 00:20:21.644 Removing: /var/run/dpdk/spdk_pid65370 00:20:21.644 Removing: /var/run/dpdk/spdk_pid65629 00:20:21.644 Removing: /var/run/dpdk/spdk_pid65775 00:20:21.644 Removing: /var/run/dpdk/spdk_pid67177 00:20:21.644 Removing: /var/run/dpdk/spdk_pid67432 00:20:21.644 Removing: /var/run/dpdk/spdk_pid67579 
00:20:21.644 Removing: /var/run/dpdk/spdk_pid68981 00:20:21.644 Removing: /var/run/dpdk/spdk_pid69432 00:20:21.644 Removing: /var/run/dpdk/spdk_pid69578 00:20:21.644 Removing: /var/run/dpdk/spdk_pid71090 00:20:21.644 Removing: /var/run/dpdk/spdk_pid71355 00:20:21.644 Removing: /var/run/dpdk/spdk_pid71507 00:20:21.644 Removing: /var/run/dpdk/spdk_pid73004 00:20:21.644 Removing: /var/run/dpdk/spdk_pid73268 00:20:21.644 Removing: /var/run/dpdk/spdk_pid73420 00:20:21.644 Removing: /var/run/dpdk/spdk_pid74916 00:20:21.644 Removing: /var/run/dpdk/spdk_pid75409 00:20:21.644 Removing: /var/run/dpdk/spdk_pid75560 00:20:21.644 Removing: /var/run/dpdk/spdk_pid75710 00:20:21.644 Removing: /var/run/dpdk/spdk_pid76141 00:20:21.644 Removing: /var/run/dpdk/spdk_pid76877 00:20:21.644 Removing: /var/run/dpdk/spdk_pid77255 00:20:21.645 Removing: /var/run/dpdk/spdk_pid77944 00:20:21.645 Removing: /var/run/dpdk/spdk_pid78391 00:20:21.645 Removing: /var/run/dpdk/spdk_pid79156 00:20:21.645 Removing: /var/run/dpdk/spdk_pid79570 00:20:21.645 Removing: /var/run/dpdk/spdk_pid81553 00:20:21.645 Removing: /var/run/dpdk/spdk_pid82006 00:20:21.645 Removing: /var/run/dpdk/spdk_pid82446 00:20:21.645 Removing: /var/run/dpdk/spdk_pid84542 00:20:21.645 Removing: /var/run/dpdk/spdk_pid85033 00:20:21.645 Removing: /var/run/dpdk/spdk_pid85556 00:20:21.645 Removing: /var/run/dpdk/spdk_pid86619 00:20:21.645 Removing: /var/run/dpdk/spdk_pid86947 00:20:21.645 Removing: /var/run/dpdk/spdk_pid87891 00:20:21.645 Removing: /var/run/dpdk/spdk_pid88214 00:20:21.645 Removing: /var/run/dpdk/spdk_pid89159 00:20:21.645 Removing: /var/run/dpdk/spdk_pid89482 00:20:21.645 Removing: /var/run/dpdk/spdk_pid90164 00:20:21.645 Removing: /var/run/dpdk/spdk_pid90446 00:20:21.645 Removing: /var/run/dpdk/spdk_pid90513 00:20:21.645 Removing: /var/run/dpdk/spdk_pid90565 00:20:21.645 Removing: /var/run/dpdk/spdk_pid90821 00:20:21.645 Removing: /var/run/dpdk/spdk_pid91001 00:20:21.645 Removing: /var/run/dpdk/spdk_pid91098 
00:20:21.645 Removing: /var/run/dpdk/spdk_pid91192 00:20:21.645 Removing: /var/run/dpdk/spdk_pid91245 00:20:21.645 Removing: /var/run/dpdk/spdk_pid91276 00:20:21.645 Clean 00:20:21.904 09:39:10 -- common/autotest_common.sh@1451 -- # return 0 00:20:21.904 09:39:10 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:20:21.904 09:39:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.904 09:39:10 -- common/autotest_common.sh@10 -- # set +x 00:20:21.904 09:39:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:20:21.905 09:39:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.905 09:39:10 -- common/autotest_common.sh@10 -- # set +x 00:20:21.905 09:39:10 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:21.905 09:39:10 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:21.905 09:39:10 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:21.905 09:39:10 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:20:21.905 09:39:10 -- spdk/autotest.sh@394 -- # hostname 00:20:21.905 09:39:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:22.164 geninfo: WARNING: invalid characters removed from testname! 
00:20:44.137 09:39:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:46.673 09:39:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:48.584 09:39:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:50.495 09:39:38 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:52.404 09:39:40 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:54.945 09:39:42 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:56.856 09:39:44 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:56.856 09:39:44 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:56.856 09:39:44 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:56.856 09:39:44 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:56.856 09:39:44 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:56.856 09:39:44 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:56.856 + [[ -n 5429 ]] 00:20:56.856 + sudo kill 5429 00:20:56.865 [Pipeline] } 00:20:56.880 [Pipeline] // timeout 00:20:56.886 [Pipeline] } 00:20:56.902 [Pipeline] // stage 00:20:56.907 [Pipeline] } 00:20:56.922 [Pipeline] // catchError 00:20:56.931 [Pipeline] stage 00:20:56.933 [Pipeline] { (Stop VM) 00:20:56.945 [Pipeline] sh 00:20:57.247 + vagrant halt 00:20:59.784 ==> default: Halting domain... 00:21:07.922 [Pipeline] sh 00:21:08.205 + vagrant destroy -f 00:21:10.747 ==> default: Removing domain... 
00:21:10.760 [Pipeline] sh 00:21:11.048 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:21:11.057 [Pipeline] } 00:21:11.074 [Pipeline] // stage 00:21:11.079 [Pipeline] } 00:21:11.096 [Pipeline] // dir 00:21:11.102 [Pipeline] } 00:21:11.116 [Pipeline] // wrap 00:21:11.123 [Pipeline] } 00:21:11.136 [Pipeline] // catchError 00:21:11.146 [Pipeline] stage 00:21:11.148 [Pipeline] { (Epilogue) 00:21:11.161 [Pipeline] sh 00:21:11.446 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:15.655 [Pipeline] catchError 00:21:15.657 [Pipeline] { 00:21:15.669 [Pipeline] sh 00:21:15.950 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:15.950 Artifacts sizes are good 00:21:15.959 [Pipeline] } 00:21:15.973 [Pipeline] // catchError 00:21:15.984 [Pipeline] archiveArtifacts 00:21:15.992 Archiving artifacts 00:21:16.137 [Pipeline] cleanWs 00:21:16.149 [WS-CLEANUP] Deleting project workspace... 00:21:16.149 [WS-CLEANUP] Deferred wipeout is used... 00:21:16.156 [WS-CLEANUP] done 00:21:16.158 [Pipeline] } 00:21:16.174 [Pipeline] // stage 00:21:16.179 [Pipeline] } 00:21:16.194 [Pipeline] // node 00:21:16.199 [Pipeline] End of Pipeline 00:21:16.243 Finished: SUCCESS